repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---|
httpie/cli
|
api
| 1,516 |
Average speed is reported incorrectly for resumed downloads
|
## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Download a file using `http -dc <address> -o ./somefile`
2. Interrupt the download at a high percentage done (e.g. 95%)
3. Resume the download with the same command in Step 1
4. When it completes, observe the average speed printed
The code that computes the average speed divides the _total_ file size by the time spent in the current (resumed) invocation of HTTPie, so the output is a much higher average speed than the actual one.
We should keep track of how much of the file was downloaded in the current invocation and use that as the numerator over here: https://github.com/httpie/httpie/blob/30a6f73ec806393d897247b4c7268832de811ff7/httpie/output/ui/rich_progress.py#L41
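A minimal sketch of that bookkeeping (illustrative names, not HTTPie's actual internals): record the byte offset the resumed download starts from, and divide only the bytes transferred in this invocation by the elapsed time.
```python
import time

class InvocationSpeedTracker:
    """Sketch: average speed over the current invocation only."""

    def __init__(self, resumed_from: int = 0):
        self.resumed_from = resumed_from   # bytes already on disk before resuming
        self.started_at = time.monotonic()

    def average_speed(self, downloaded_total: int) -> float:
        """Bytes per second for this invocation, not for the whole file."""
        elapsed = time.monotonic() - self.started_at
        return (downloaded_total - self.resumed_from) / max(elapsed, 1e-9)
```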
## Current result
When I download the last 10% of a file at 5MB/s, the printed average speed is 50MB/s (1/10 the file size => 10x the speed)
## Expected result
When I download the last 10% of a file at 5MB/s, the printed average speed should be 5MB/s
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
$ http --debug <COMPLETE ARGUMENT LIST THAT TRIGGERS THE ERROR>
<COMPLETE OUTPUT>
```
## Additional information, screenshots, or code examples
…
|
open
|
2023-06-27T11:46:39Z
|
2023-07-14T22:05:03Z
|
https://github.com/httpie/cli/issues/1516
|
[
"bug",
"new"
] |
sankalp-khare
| 1 |
microsoft/MMdnn
|
tensorflow
| 349 |
Cntk Parser has not supported operator [Slice]
|
Platform (like ubuntu 16.04/win10):
win10
Python version:
3.5
Source framework with version (like Tensorflow 1.4.1 with GPU):
CNTK-gpu 2.5.1
Destination framework with version (like CNTK 2.3 with GPU):
Keras 2.2.0
|
closed
|
2018-08-07T18:33:42Z
|
2018-08-11T07:19:28Z
|
https://github.com/microsoft/MMdnn/issues/349
|
[] |
rasemailcz
| 7 |
iperov/DeepFaceLab
|
deep-learning
| 513 |
RTX 2070 Super, can't extract faces.
|
Are RTX cards still not compatible with DFL? I'm using DeepFaceLabCUDA10.1AVX 10/14/19
How do I use DFL with RTX card?
|
closed
|
2019-12-06T06:27:59Z
|
2020-03-28T05:41:58Z
|
https://github.com/iperov/DeepFaceLab/issues/513
|
[] |
grexter4
| 4 |
manrajgrover/halo
|
jupyter
| 177 |
Suggestion: show GIF demo of the various spinners
|
## Description
Looking at the various spinner animations in this JSON file, it is difficult to really get how they will look:
https://github.com/sindresorhus/cli-spinners/blob/main/spinners.json
It would be useful to have a table showing each spinner's name next to a GIF of it in action; a rough generation sketch follows.
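A rough sketch of generating such a table from the spinner data (assuming a local copy of cli-spinners' `spinners.json`, whose entries map each name to an `interval` and a list of `frames`; the GIF column would still need separate recording tooling):
```python
import json

# Load the spinner definitions from a local copy of the JSON file linked above.
with open("spinners.json") as f:
    spinners = json.load(f)

# Emit a markdown table skeleton that a GIF column could later be added to.
print("| Name | Interval (ms) | First frames |")
print("|---|---|---|")
for name, spec in spinners.items():
    print(f"| {name} | {spec['interval']} | {' '.join(spec['frames'][:4])} |")
```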
|
open
|
2023-05-11T12:38:13Z
|
2023-05-11T12:38:13Z
|
https://github.com/manrajgrover/halo/issues/177
|
[] |
MasterScrat
| 0 |
skypilot-org/skypilot
|
data-science
| 4,766 |
[Reservations] Only waits for reservations
|
Reservations can be much cheaper than on-demand instances. Once a reservation is purchased for a future period of time, a user wants `sky launch` to wait only for the reservation to become ready before launching the job, without risking getting an on-demand cluster at a much higher price.
It could be done by:
1. Allowing a new value for the `prioritize_reservations` field in `~/.sky/config.yaml`: `reservation_only`
2. Allowing the `prioritize_reservations` to be specified in SkyPilot yaml, i.e. the experimental section.
|
open
|
2025-02-20T01:32:32Z
|
2025-02-20T01:36:38Z
|
https://github.com/skypilot-org/skypilot/issues/4766
|
[
"good first issue"
] |
Michaelvll
| 0 |
whitphx/streamlit-webrtc
|
streamlit
| 1,213 |
Inconsistent issue with streamlit-webrtc in streamlit app
|
A post in https://github.com/aiortc/aiortc/issues/85 says
> Maybe the root cause is something weird happening in the python selector generated by others things that share the same event loop.
---
Hypothesis:
* The WebRTC connection is properly closed as ICE failed under this network environment ("ICE connection state is closed")
* So the event loop is closed or stopped in weird state.
_Originally posted by @whitphx in https://github.com/whitphx/streamlit-webrtc/issues/552#issuecomment-987885401_
--------------------------------------------------------
A similar issue has occurred again; it affects your component page in Streamlit as well:
https://webrtc.streamlit.app/
I checked whether the STUN server (stun.l.google.com:19302) is down using this site:
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
It seems the STUN server is working fine. Any idea what is causing the error? Please find attached a logged screenshot of the error.
Please advise whether this is something that can be fixed, perhaps in the asyncio library.

|
open
|
2023-03-08T07:21:46Z
|
2025-02-07T17:26:53Z
|
https://github.com/whitphx/streamlit-webrtc/issues/1213
|
[] |
araii
| 45 |
jonaswinkler/paperless-ng
|
django
| 291 |
Error consuming pdf
|
Setup: paperless-ng 0.9.11
The following error occurs with every single file when paperless tries to consume it:
08:06:27 [Q] ERROR Failed [2007-08-07 - Rechnung - Alice.pdf] - Unsupported mime type application/octet-stream of file 2007-08-07 - Rechnung - Alice.pdf : Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
res = f(*task["args"], **task["kwargs"])
File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file
override_tag_ids=override_tag_ids)
File "/usr/src/paperless/src/documents/consumer.py", line 109, in try_consume_file
f"Unsupported mime type {mime_type} of file {self.filename}")
documents.consumer.ConsumerError: Unsupported mime type application/octet-stream of file 2007-08-07 - Rechnung - Alice.pdf
And the file is not consumed into paperless.
It seems the file is the problem... but I can view the file and its content in my PDF viewer.
|
closed
|
2021-01-08T07:14:58Z
|
2021-01-08T12:27:10Z
|
https://github.com/jonaswinkler/paperless-ng/issues/291
|
[
"duplicate"
] |
andbez
| 1 |
holoviz/panel
|
jupyter
| 7,166 |
Template header_color option is ignored
|
#### ALL software version info
```
$ pip list | egrep 'panel|bokeh|param'
bokeh 3.5.1
ipywidgets_bokeh 1.6.0
panel 1.5.0b4
param 2.1.1
```
#### Description of expected behavior and the observed behavior
Both with `BootstrapTemplate` and `MaterialTemplate` I find that setting `header_color` seems to have no effect.
#### Complete, minimal, self-contained example code that reproduces the issue
```
import panel as pn
pn.extension()
pn.template.BootstrapTemplate(
title="Hello",
header_background="green",
header_color="red",
).servable()
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
<img width="916" alt="Screenshot 2024-08-19 at 20 48 27" src="https://github.com/user-attachments/assets/8da49425-3c92-4561-a015-e071b20a9427">
|
closed
|
2024-08-19T18:48:38Z
|
2024-08-27T14:54:51Z
|
https://github.com/holoviz/panel/issues/7166
|
[] |
cdeil
| 0 |
babysor/MockingBird
|
deep-learning
| 452 |
Hello everyone: after opening the web UI and clicking record, an [Uncaught Error] appears saying "recStart" is not defined. What is the problem, and how can it be solved?
|
**Summary [one-sentence description of the issue]**
A clear and concise description of what the issue is.
**Env & To Reproduce [environment and reproduction]**
Describe the environment, code version, and model you used.
**Screenshots [if any]**
If applicable, add screenshots to help explain the problem.
|
open
|
2022-03-13T00:57:05Z
|
2022-03-13T00:57:05Z
|
https://github.com/babysor/MockingBird/issues/452
|
[] |
JJJIANGJIANG
| 0 |
microsoft/qlib
|
deep-learning
| 1,880 |
"qlib/data/_libs/rolling" can not be import
|
## ❓ Questions and Help
We sincerely suggest that you carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue.
Location: `qlib/data/_libs/`
`expanding` and `rolling` cannot be imported because they are `.pyx` files.
|
open
|
2024-12-29T11:52:10Z
|
2024-12-30T09:02:01Z
|
https://github.com/microsoft/qlib/issues/1880
|
[
"question"
] |
Peakara
| 2 |
mljar/mercury
|
data-visualization
| 211 |
mercury adding widgets not working
|
Hi, I installed mercury using the command `pip install mercury` in Python 3.9.13
And then ran the code below:
```python
import mercury as mr
name = mr.Text(label="What is your name?")
```
The above code raises an error:
`AttributeError: module 'mercury' has no attribute 'Text'`
Any help is appreciated. Thanks!
|
closed
|
2023-02-15T17:48:47Z
|
2023-02-15T17:56:54Z
|
https://github.com/mljar/mercury/issues/211
|
[] |
eluyutao
| 1 |
xuebinqin/U-2-Net
|
computer-vision
| 14 |
Can I specify object to be segmented?
|
Hello, thank you for this work.
The issue is illustrated by the attached images.
Is there a way to choose which object gets segmented? How do I keep the guitar selected across the sequence of images?
Thank you.
|
closed
|
2020-05-12T15:19:30Z
|
2020-05-14T23:40:32Z
|
https://github.com/xuebinqin/U-2-Net/issues/14
|
[] |
xamxixixo
| 2 |
microsoft/qlib
|
machine-learning
| 1,571 |
When will Qlib support macOS Ventura (macOS 13)?
|
## ❓ Questions and Help
I noticed that v0.9.2 already supports macOS Big Sur (macOS 11), but my Mac runs macOS Ventura (macOS 13). I have tried several methods to install Qlib on the M1 architecture, but there are always warnings or errors at runtime, so I sincerely hope the developers can release a Qlib build compatible with the M1 architecture as soon as possible.
|
closed
|
2023-06-26T05:53:54Z
|
2023-12-08T03:02:03Z
|
https://github.com/microsoft/qlib/issues/1571
|
[
"question",
"stale"
] |
ikaroinory
| 4 |
praw-dev/praw
|
api
| 1,598 |
CrossPost Support
|
After my last post I got this response:
> This is clearly not true. There is an attribute, `submission.crosspost_parent`, that provides the fullname of the parent. The actual parent submission can be obtained by the following code:
```python
sub = reddit.submission(url="https://www.reddit.com/r/WhyWomenLiveLonger/comments/k9v65k/reason_26351836/")
parent = next(reddit.info([sub.crosspost_parent]))
parent
# Submission(id='k9j5iz')
```
---------------------------
I tried it and got this error:
```
AttributeError: 'Submission' object has no attribute 'crosspost_parent'
```
---------------------------
This is my code:
```python
sub = self.rc.submission(url=f"https://www.reddit.com{submission.permalink}")
parent = next(self.rc.info([sub.crosspost_parent]))
```
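For what it's worth, a guarded attribute access avoids the `AttributeError`; a sketch under the assumption (based on Reddit's API behavior) that `crosspost_parent` is only present when the submission actually is a crosspost:
```python
import praw

# Illustrative credentials; substitute your own app's values.
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="...")
sub = reddit.submission(url="https://www.reddit.com/r/WhyWomenLiveLonger/comments/k9v65k/reason_26351836/")

# Guard the access, since non-crossposts have no `crosspost_parent` attribute.
parent_fullname = getattr(sub, "crosspost_parent", None)
parent = next(reddit.info([parent_fullname])) if parent_fullname else None
```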
|
closed
|
2020-12-10T03:37:46Z
|
2021-03-14T22:05:36Z
|
https://github.com/praw-dev/praw/issues/1598
|
[] |
Nepion
| 9 |
stanford-oval/storm
|
nlp
| 133 |
Bing search error
|
Hi,
Bing search can't generate a report, and there are many 403 errors even for sites that are working.
> python examples/run_storm_wiki_gpt.py \
--output-dir ../storm/frontend/demo_light/DEMO_WORKING_DIR \
--retriever bing \
--do-research \
--do-generate-outline \
--do-generate-article \
--do-polish-article --search-top-k 200 --retrieve-top-k 200
Topic: how is the war stress and political situation in Middle East now ?
root : ERROR : Error occurs when searching query current political situation in Middle East: 'webPages'
root : ERROR : Error occurs when searching query recent conflicts in Middle East: 'webPages'
Error while requesting URL('https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/05/world/middleeast/israel-hamas-iran-retaliation.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/05/world/middleeast/israel-hamas-iran-retaliation.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/05/23/world/israel-gaza-war-hamas') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/05/23/world/israel-gaza-war-hamas'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/us-personnel-hurt-attack-against-base-iraq-officials-say-2024-08-05/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/us-personnel-hurt-attack-against-base-iraq-officials-say-2024-08-05/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/live/2024/02/20/world/israel-hamas-war-gaza-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/02/20/world/israel-hamas-war-gaza-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/05/06/world/israel-gaza-war-hamas') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/05/06/world/israel-gaza-war-hamas'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/03/08/world/israel-hamas-war-gaza-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/03/08/world/israel-hamas-war-gaza-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/biden-voices-hope-iran-will-stand-down-is-uncertain-2024-08-03/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/biden-voices-hope-iran-will-stand-down-is-uncertain-2024-08-03/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2024/08/05/world/middleeast/iraq-us-troops-iran-attack.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/05/world/middleeast/iraq-us-troops-iran-attack.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/06/world/middleeast/lebanon-hezbollah-israel.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/06/world/middleeast/lebanon-hezbollah-israel.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/us-expresses-concern-over-escalating-middle-east-conflict-risk-2024-07-31/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/us-expresses-concern-over-escalating-middle-east-conflict-risk-2024-07-31/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/us-personnel-hurt-attack-against-base-iraq-officials-say-2024-08-05/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/us-personnel-hurt-attack-against-base-iraq-officials-say-2024-08-05/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.politico.com/news/2024/07/29/us-war-worries-middle-east-00171680') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.politico.com/news/2024/07/29/us-war-worries-middle-east-00171680'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.politico.com/newsletters/national-security-daily/2024/08/05/two-possible-scenarios-for-an-iran-attack-against-israel-00172660') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.politico.com/newsletters/national-security-daily/2024/08/05/two-possible-scenarios-for-an-iran-attack-against-israel-00172660'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/02/02/world/us-iran-strikes-middle-east-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/02/02/world/us-iran-strikes-middle-east-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://thehill.com/newsletters/defense-national-security/4812642-us-seeks-to-limit-chances-of-larger-middle-east-war/') - HTTPStatusError("Client error '403 Forbidden' for url 'https://thehill.com/newsletters/defense-national-security/4812642-us-seeks-to-limit-chances-of-larger-middle-east-war/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/05/06/world/israel-gaza-war-hamas') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/05/06/world/israel-gaza-war-hamas'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/04/29/world/israel-gaza-war-hamas') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/04/29/world/israel-gaza-war-hamas'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.tandfonline.com/doi/full/10.1080/19448953.2021.1888251') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.tandfonline.com/doi/full/10.1080/19448953.2021.1888251'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.ohchr.org/en/statements/2024/08/un-human-rights-chief-risk-wider-conflict-middle-east') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.ohchr.org/en/statements/2024/08/un-human-rights-chief-risk-wider-conflict-middle-east'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/06/world/middleeast/lebanon-hezbollah-israel.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/06/world/middleeast/lebanon-hezbollah-israel.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/is-hezbollah-israel-conflict-about-spiral-2024-07-28/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/is-hezbollah-israel-conflict-about-spiral-2024-07-28/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.politico.com/news/middle-east') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.politico.com/news/middle-east'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/g7-nations-urge-de-escalation-middle-east-amid-threat-broader-conflict-2024-08-05/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/g7-nations-urge-de-escalation-middle-east-amid-threat-broader-conflict-2024-08-05/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2023/06/13/world/middleeast/egypt-opposition-talks.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2023/06/13/world/middleeast/egypt-opposition-talks.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.washingtonpost.com/world/2024/04/12/israel-hamas-war-news-gaza-palestine/') - ReadTimeout('The read operation timed out')
Error while requesting URL('https://www.reuters.com/world/middle-east/dont-bomb-beirut-us-leads-push-rein-israels-response-2024-07-29/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/dont-bomb-beirut-us-leads-push-rein-israels-response-2024-07-29/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/live/2024/03/13/world/israel-hamas-war-gaza-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/03/13/world/israel-hamas-war-gaza-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/03/20/world/israel-hamas-war-gaza-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/03/20/world/israel-hamas-war-gaza-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.arabnews.com/middleeast') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.arabnews.com/middleeast'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12829') - HTTPStatusError("Client error '403 Forbidden' for url 'https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12829'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.wsj.com/world/middle-east') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.wsj.com/world/middle-east'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://academic.oup.com/ia/article-abstract/98/2/689/6530475') - HTTPStatusError("Client error '403 Forbidden' for url 'https://academic.oup.com/ia/article-abstract/98/2/689/6530475'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.washingtonpost.com/world/2024/08/02/haniyeh-israel-ceasefire-middle-east/') - ReadTimeout('The read operation timed out')
Error while requesting URL('https://www.reuters.com/world/middle-east/middle-eastern-stocks-slump-us-recession-fears-regional-tensions-2024-08-05/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/middle-eastern-stocks-slump-us-recession-fears-regional-tensions-2024-08-05/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/israel-palestinian-dispute-hinges-statehood-land-jerusalem-refugees-2023-10-10/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/israel-palestinian-dispute-hinges-statehood-land-jerusalem-refugees-2023-10-10/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.reuters.com/world/middle-east/pentagon-tells-israel-it-will-adjust-us-troops-middle-east-2024-08-02/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/pentagon-tells-israel-it-will-adjust-us-troops-middle-east-2024-08-02/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.wsj.com/world/middle-east/a-guide-to-the-middle-easts-growing-conflicts-in-six-maps-2ea0c0da') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.wsj.com/world/middle-east/a-guide-to-the-middle-easts-growing-conflicts-in-six-maps-2ea0c0da'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://crsreports.congress.gov/product/pdf/IF/IF11726/1') - HTTPStatusError("Client error '403 Forbidden' for url 'https://crsreports.congress.gov/product/pdf/IF/IF11726/1'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2021/05/12/world/middleeast/israeli-palestinian-conflict-gaza-hamas.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2021/05/12/world/middleeast/israeli-palestinian-conflict-gaza-hamas.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/hamas-chief-ismail-haniyeh-killed-iran-hamas-says-statement-2024-07-31/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/hamas-chief-ismail-haniyeh-killed-iran-hamas-says-statement-2024-07-31/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/2024/07/30/world/middleeast/us-iran-iraq.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/07/30/world/middleeast/us-iran-iraq.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://news.un.org/en/story/2023/12/1145182') - ReadTimeout('The read operation timed out')
Error while requesting URL('https://www.washingtonpost.com/world/middle-east/') - ReadTimeout('The read operation timed out')
Error while requesting URL('https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/02/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/07/30/world/middleeast/us-iran-iraq.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/07/30/world/middleeast/us-iran-iraq.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/08/03/world/israel-hamas-iran-hezbollah-gaza'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.wsj.com/world/middle-east/iran-warns-pilots-to-avoid-airspace-as-middle-east-awaits-attack-0682f78e') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.wsj.com/world/middle-east/iran-warns-pilots-to-avoid-airspace-as-middle-east-awaits-attack-0682f78e'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.usnews.com/news/world/articles/2024-07-31/us-expresses-concern-over-escalating-middle-east-conflict-risk') - ReadTimeout('The read operation timed out')
Error while requesting URL('https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/05/world/middleeast/iran-israel-attack-strikes-why.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/08/01/world/middleeast/middle-east-israel-iran-hezbollah.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.tandfonline.com/doi/full/10.1080/19448953.2021.1888251') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.tandfonline.com/doi/full/10.1080/19448953.2021.1888251'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.fpri.org/article/2024/03/the-realignment-of-the-middle-east/') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.fpri.org/article/2024/03/the-realignment-of-the-middle-east/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/2024/07/31/world/middleeast/iran-lebanon-israel-war-assassination.html'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/killing-hamas-leader-intended-prolong-gaza-conflict-abbas-tells-ria-news-agency-2024-08-05/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/killing-hamas-leader-intended-prolong-gaza-conflict-abbas-tells-ria-news-agency-2024-08-05/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.reuters.com/world/middle-east/hamas-chief-ismail-haniyeh-killed-iran-hamas-says-statement-2024-07-31/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/hamas-chief-ismail-haniyeh-killed-iran-hamas-says-statement-2024-07-31/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.nytimes.com/live/2024/02/02/world/us-iran-strikes-middle-east-news') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.nytimes.com/live/2024/02/02/world/us-iran-strikes-middle-east-news'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.reuters.com/world/middle-east/israel-palestinian-dispute-hinges-statehood-land-jerusalem-refugees-2023-10-10/') - HTTPStatusError("Client error '401 HTTP Forbidden' for url 'https://www.reuters.com/world/middle-east/israel-palestinian-dispute-hinges-statehood-land-jerusalem-refugees-2023-10-10/'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401")
Error while requesting URL('https://www.chathamhouse.org/2024/05/beware-middle-easts-forgotten-wars') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.chathamhouse.org/2024/05/beware-middle-easts-forgotten-wars'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
Error while requesting URL('https://www.wsj.com/world/middle-east/a-guide-to-the-middle-easts-growing-conflicts-in-six-maps-2ea0c0da') - HTTPStatusError("Client error '403 Forbidden' for url 'https://www.wsj.com/world/middle-east/a-guide-to-the-middle-easts-growing-conflicts-in-six-maps-2ea0c0da'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403")
_run_conversation
conv = future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/storm_wiki/modules/knowledge_curation.py", line 259, in run_conv
return conv_simulator(
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/dspy/primitives/program.py", line 26, in __call__
return self.forward(*args, **kwargs)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/storm_wiki/modules/knowledge_curation.py", line 55, in forward
expert_output = self.topic_expert(topic=topic, question=user_utterance, ground_truth_url=ground_truth_url)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/dspy/primitives/program.py", line 26, in __call__
return self.forward(*args, **kwargs)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/storm_wiki/modules/knowledge_curation.py", line 174, in forward
searched_results: List[StormInformation] = self.retriever.retrieve(list(set(queries)),
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/storm_wiki/modules/retriever.py", line 244, in retrieve
retrieved_data_list = self._rm(query_or_queries=query, exclude_urls=exclude_urls)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/dspy/retrieve/retrieve.py", line 30, in __call__
return self.forward(*args, **kwargs)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/rm.py", line 158, in forward
valid_url_to_snippets = self.webpage_helper.urls_to_snippets(list(url_to_results.keys()))
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/utils.py", line 405, in urls_to_snippets
articles = self.urls_to_articles(urls)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/knowledge_storm/utils.py", line 393, in urls_to_articles
article_text = extract(
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/trafilatura/core.py", line 322, in extract
options = Extractor(
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/trafilatura/settings.py", line 86, in __init__
self._set_format(output_format)
File "/home/bc/Projects/ODS/stormv2/venvStormv2/lib/python3.10/site-packages/trafilatura/settings.py", line 112, in _set_format
raise AttributeError(f"Cannot set format, must be one of: {', '.join(sorted(SUPPORTED_FORMATS))}")
AttributeError: Cannot set format, must be one of: csv, html, json, markdown, python, txt, xml, xmltei
|
closed
|
2024-08-06T14:34:07Z
|
2024-09-23T06:02:06Z
|
https://github.com/stanford-oval/storm/issues/133
|
[] |
MyraBaba
| 3 |
babysor/MockingBird
|
deep-learning
| 40 |
How do I use the trained dataset?
|
As the title says~
I put the training results downloaded from Baidu Cloud in E:\Voice\trainmodel, but running `python demo_toolbox.py -d E:\Voice\trainmodelc` does not seem to work.
|
closed
|
2021-08-23T08:01:25Z
|
2021-08-31T14:14:07Z
|
https://github.com/babysor/MockingBird/issues/40
|
[] |
zhangykevin
| 1 |
strawberry-graphql/strawberry
|
django
| 2,946 |
Pydantic v2.0.2 is not supported
|
Starting from `pydantic==2.0.1`, there is no support for `strawberry.experimental.pydantic`.
## Traceback
```sh
...
File "<path>/gql.py", line 15, in <module>
@strawberry.experimental.pydantic.type(model=schemas.Location)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'strawberry.experimental' has no attribute 'pydantic'
```
## System Information
- Strawberry version: `0.194.4`
|
closed
|
2023-07-13T06:55:17Z
|
2025-03-20T15:56:18Z
|
https://github.com/strawberry-graphql/strawberry/issues/2946
|
[
"bug"
] |
dsuhoi
| 1 |
modelscope/modelscope
|
nlp
| 905 |
Hope datasets provides debug logs for the viewer interface, to diagnose what causes the viewer to not show up
|
**Describe the feature**
Features description
**Motivation**
A clear and concise description of the motivation of the feature. Ex1. It is inconvenient when [....]. Ex2. There is a recent paper [....], which is very helpful for [....].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
**Additional context**
Add any other context or screenshots about the feature request here. If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
Local debugging prints everything normally, but as soon as I push to the cloud the automatic viewer stops working, and there is no way to find the cause.
|
closed
|
2024-07-11T08:03:36Z
|
2024-10-23T05:28:21Z
|
https://github.com/modelscope/modelscope/issues/905
|
[] |
monetjoe
| 4 |
plotly/dash-table
|
dash
| 744 |
Header width with fixed headers expand when filtering even when columns have fixed width
|

```python
import dash
from dash.dependencies import Input, Output
import dash_table
import dash_html_components as html
import datetime
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder2007.csv')
df['Mock Date'] = [
    datetime.datetime(2020, 1, 1, 0, 0, 0) + i * datetime.timedelta(hours=13)
    for i in range(len(df))
]

app = dash.Dash(__name__)

def table_type(df_column):
    # Note - this only works with Pandas >= 1.0.0
    if isinstance(df_column.dtype, pd.DatetimeTZDtype):
        return 'datetime'
    elif (isinstance(df_column.dtype, pd.StringDtype) or
            isinstance(df_column.dtype, pd.BooleanDtype) or
            isinstance(df_column.dtype, pd.CategoricalDtype) or
            isinstance(df_column.dtype, pd.PeriodDtype)):
        return 'text'
    elif (isinstance(df_column.dtype, pd.SparseDtype) or
            isinstance(df_column.dtype, pd.IntervalDtype) or
            isinstance(df_column.dtype, pd.Int8Dtype) or
            isinstance(df_column.dtype, pd.Int16Dtype) or
            isinstance(df_column.dtype, pd.Int32Dtype) or
            isinstance(df_column.dtype, pd.Int64Dtype)):
        return 'numeric'
    else:
        return 'any'

app.layout = dash_table.DataTable(
    columns=[
        {'name': i, 'id': i, 'type': table_type(df[i])} for i in df.columns
    ],
    data=df.to_dict('records'),
    filter_action='native',
    fixed_rows={'headers': True},
    style_table={'height': 400},
    style_data={
        'minWidth': '{}%'.format(100 / len(df.columns)),
        'width': '{}%'.format(100 / len(df.columns)),
        'maxWidth': '{}%'.format(100 / len(df.columns))
    }
)

if __name__ == '__main__':
    app.run_server(debug=True)
```
fyi @Marc-Andre-Rivet for when you are in the neighborhood
|
open
|
2020-04-14T23:28:27Z
|
2020-04-14T23:28:41Z
|
https://github.com/plotly/dash-table/issues/744
|
[
"bug"
] |
chriddyp
| 1 |
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 799 |
Some questions about the pretraining stage
|
### The following must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some problems have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, please make sure you follow the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues for this problem, and found no similar issue or solution
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to check the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, performance and normal operation cannot be guaranteed
### Issue type
Other
### Base model
LLaMA-13B
### Operating system
Linux
### Detailed description
I would like to ask about pretraining with an extended vocabulary.
I have read your QA and some issues, and found two pretraining approaches.
The first, earlier one: stage 1 targets the extended vocabulary, first training the resized embedding, and then using LoRA to pretrain the embedding, lm_head, and transformer layers.
The second, later one: directly using LoRA to pretrain the embedding, lm_head, and transformer layers.
My questions are:
In the earlier approach, why does stage 1 train only the embedding, without also training the corresponding lm_head?
In the later approach, are the embedding and lm_head parameters for the newly added vocabulary randomly initialized, and is LoRA then used to pretrain the whole model directly? Could this cause a problem: the newly added vocabulary rows of the base model are random parameters while the corresponding LoRA parameters are trained, so after merging, this part may not work well. (A sketch of the vocabulary-extension step in question follows.)
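A minimal sketch of the vocabulary-extension step under discussion, using the HuggingFace transformers API (the paths are hypothetical placeholders, and this is not this project's actual training code):
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/extended_tokenizer")  # hypothetical path
model = LlamaForCausalLM.from_pretrained("path/to/llama-13b")             # hypothetical path

# Rows for the newly added tokens are appended to both the input embedding and
# the lm_head; these new rows start out freshly initialized, not yet trained.
model.resize_token_embeddings(len(tokenizer))
```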
### Dependencies (must be provided for code-related problems)
```
# Paste dependency information here
```
### Run logs or screenshots
```
# Paste run logs here
```
|
closed
|
2023-07-31T07:25:55Z
|
2023-08-10T22:02:15Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/799
|
[
"stale"
] |
chenhk-chn
| 5 |
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,611 |
AssertionError: .\datasets\wys_cyclegan_dataset\train is not a valid directory
|
Is it because it runs under Windows?
|
open
|
2023-11-08T06:28:44Z
|
2023-11-08T06:28:44Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1611
|
[] |
liboya888
| 0 |
fa0311/TwitterInternalAPIDocument
|
graphql
| 616 |
Is there any package that can parse Twitter's return JSON?
|
Good work, but I'm seeking help: Is there any package that can parse Twitter's return JSON?
|
open
|
2024-08-23T16:48:14Z
|
2024-08-26T15:38:08Z
|
https://github.com/fa0311/TwitterInternalAPIDocument/issues/616
|
[] |
bwnjnOEI
| 1 |
scikit-hep/awkward
|
numpy
| 2,374 |
`ak.cartesian` followed by indexing operation over-touches data
|
### Version of Awkward Array
main + #2370 & #2373
### Description and code to reproduce
```python3
import json
import awkward as ak
def delta_r2(a, b):
return (a.eta - b.eta) ** 2 + (a.phi - b.phi) ** 2
fromjson = {
"class": "RecordArray",
"fields": ["muon", "jet"],
"contents": [
{
"class": "ListOffsetArray",
"offsets": "i64",
"content": {
"class": "RecordArray",
"fields": ["pt", "eta", "phi", "crossref"],
"contents": [
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "muon_pt!",
},
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "muon_eta!",
},
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "muon_phi!",
},
{
"class": "ListOffsetArray",
"offsets": "i64",
"content": {
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "muon_crossref_content!",
},
"parameters": {},
"form_key": "muon_crossref_index!",
},
],
"parameters": {},
"form_key": "muon_record!",
},
"parameters": {},
"form_key": "muon_list!",
},
{
"class": "ListOffsetArray",
"offsets": "i64",
"content": {
"class": "RecordArray",
"fields": ["pt", "eta", "phi", "crossref", "thing1"],
"contents": [
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "jet_pt!",
},
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "jet_eta!",
},
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "jet_phi!",
},
{
"class": "ListOffsetArray",
"offsets": "i64",
"content": {
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "jet_crossref_content!",
},
"parameters": {},
"form_key": "jet_crossref_index!",
},
{
"class": "NumpyArray",
"primitive": "int64",
"inner_shape": [],
"parameters": {},
"form_key": "jet_thing1!",
},
],
"parameters": {},
"form_key": "jet_record!",
},
"parameters": {},
"form_key": "jet_list!",
},
],
"parameters": {},
"form_key": "outer!",
}
form = ak.forms.from_json(json.dumps(fromjson))
ttlayout, report = ak._nplikes.typetracer.typetracer_with_report(form)
ttarray = ak.Array(ttlayout)
a = ak.cartesian([ttarray.muon, ttarray.jet], axis=1, nested=True)
print("ab>>>", report.data_touched, "\n")
mval = delta_r2(a["0"], a["1"])
print("dr>>>>", report.data_touched, "\n")
mmin = ak.argmin(mval, axis=2)
print("mmin>>", report.data_touched, "\n")
ak.firsts(a["1"][mmin], axis=2).pt
print("pt>>>>", report.data_touched, "\n")
```
produces:
```
ab>>> ['muon_list!', 'jet_list!']
dr>>>> ['muon_list!', 'jet_list!', 'muon_eta!', 'jet_eta!', 'muon_phi!', 'jet_phi!']
mmin>> ['muon_list!', 'jet_list!', 'muon_eta!', 'jet_eta!', 'muon_phi!', 'jet_phi!']
pt>>>> ['muon_list!', 'jet_list!', 'muon_eta!', 'jet_eta!', 'muon_phi!', 'jet_phi!', 'jet_pt!', 'jet_crossref_index!', 'jet_crossref_content!', 'jet_thing1!']
```
This touches everything in the "jet" object in this case. It should only touch "jet_pt!" in addition to what is required by the matching criterion (delta_r here, but it could be anything).
Follow up to #2372
|
closed
|
2023-04-07T18:30:13Z
|
2023-04-08T16:51:38Z
|
https://github.com/scikit-hep/awkward/issues/2374
|
[
"bug (unverified)"
] |
lgray
| 0 |
plotly/dash-table
|
dash
| 563 |
Text Comparison Operators
|
Typing "lens" as a filter for a text column gets interpreted as "le ns", that is "<= ns".
There is a workaround of course: typing "= lens" or "eq lens".
I suppose there's some value in being able to type "le something" instead of "<= something", but this should _only_ happen if the "le" is followed by a space.
I'd still prefer to be able to completely disable the text comparison operators. For example, French speakers would be surprised by the results of searching for "le something" compared to "Le Something" - even if their table provided case-insensitive search.
|
closed
|
2019-09-03T07:39:15Z
|
2019-10-08T13:22:10Z
|
https://github.com/plotly/dash-table/issues/563
|
[
"dash-type-bug",
"size: 1"
] |
orenbenkiki
| 1 |
hankcs/HanLP
|
nlp
| 1,369 |
pyhanlp configuration problem: HANLP_JAR_PATH is configured, but an error says it is not a jar file
|
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
- [Home documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my problem via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer there either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm that all of the above has been checked
## Version
The current latest version is: 1.7.6
The version I am using is: 1.7.6
## My problem
pyhanlp was installed via pip install pyhanlp
I downloaded the data files and hanlp-1.7.6-release.zip from the official site and extracted them into the static folder
HANLP_JAR_PATH and HANLP_ROOT_PATH are both configured to point at static
Running `from pyhanlp import *` raises: ValueError: 配置错误: HANLP_JAR_PATH=D:\anaconda\Lib\site-packages\pyhanlp\static 不是jar文件 (configuration error: HANLP_JAR_PATH=... is not a jar file)
|
closed
|
2020-01-01T07:29:25Z
|
2020-01-01T10:43:39Z
|
https://github.com/hankcs/HanLP/issues/1369
|
[
"ignored"
] |
InfiYond
| 2 |
TheKevJames/coveralls-python
|
pytest
| 139 |
Look into pytest versioning.
|
Why is it pinned as `pytest>=2.7.3,<2.8`? If we can unpin this, we can use [pytest-runner](http://doc.pytest.org/en/latest/goodpractices.html#integrating-with-setuptools-python-setup-py-test-pytest-runner).
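For reference, the pytest-runner integration described in the linked docs looks roughly like this (a sketch of the historical workflow, not this repository's actual configuration):
```python
# setup.py (sketch, assuming the pytest-runner workflow from the linked docs)
from setuptools import setup

setup(
    name="coveralls",                  # illustrative
    setup_requires=["pytest-runner"],  # lets `python setup.py test` drive pytest
    tests_require=["pytest"],          # no upper pin once the <2.8 constraint is dropped
)
```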
|
closed
|
2017-02-14T07:36:09Z
|
2017-02-22T18:00:02Z
|
https://github.com/TheKevJames/coveralls-python/issues/139
|
[] |
TheKevJames
| 1 |
robotframework/robotframework
|
automation
| 4,530 |
Unable to generate log.html /output-xunit.xml
|
After I updated to Robot Framework 6.0/6.0.1, the log.html and output-xunit.xml files are no longer generated.
|
closed
|
2022-11-08T05:39:08Z
|
2022-11-13T00:42:30Z
|
https://github.com/robotframework/robotframework/issues/4530
|
[] |
sasasa42
| 1 |
nalepae/pandarallel
|
pandas
| 271 |
Handle virtual cores more explicitly
|
**Please write here what feature `pandarallel` is missing:**
There is some inconsistency regarding whether `nb_workers` refers to physical cores or logical cores.
* The documentation does not specify explicitly whether `pandarallel.initialize(nb_workers=...)` is physical or logical cores
* The default value used if `nb_workers` is not passed is the number of physical cores ([code](https://github.com/nalepae/pandarallel/blob/261a652cddb219ac353ff803e81646c08b72fc6f/pandarallel/core.py#L36))
* It seems however, that the value passed to `nb_workers` is actually interpreted as logical cores
The main problem is that on a machine with virtual cores, pandarallel will by default use only as many virtual cores as there are physical cores, because it _counts_ the physical cores but _interprets_ the number as logical cores. This might be solvable by simply changing `False` to `True` in the line linked above (but maybe there are downstream complications).
The other improvement would be to mention explicitly in the documentation that the value passed to `nb_workers` means _logical_ cores. A sketch of the explicit workaround follows.
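A minimal sketch of the workaround in the meantime, counting logical cores explicitly (assuming psutil, which the linked pandarallel code itself appears to use internally):
```python
import psutil
from pandarallel import pandarallel

# nb_workers appears to be interpreted as logical cores (see above), so passing
# the logical-core count uses every hardware thread instead of only the physical ones.
pandarallel.initialize(nb_workers=psutil.cpu_count(logical=True))
```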
|
open
|
2024-05-16T13:24:23Z
|
2024-10-26T13:54:05Z
|
https://github.com/nalepae/pandarallel/issues/271
|
[] |
JnsLns
| 3 |
pallets/flask
|
python
| 5,407 |
Extend Config type
|
At the moment, Pylance in strict mode reports an error when trying to use the app config.
The return type of `app.config["KEY"]` is `Unknown`.
This is my first contribution to an open source project; I want to see whether I can tell Flask to take changes made to `App.config_class` into account for type hints.
Python 3.12
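A minimal illustration of the report (the config key is hypothetical; `reveal_type` is the type-checker inspection helper, importable from `typing` on Python 3.11+):
```python
from typing import reveal_type

from flask import Flask

app = Flask(__name__)
app.config["MY_SETTING"] = "value"  # hypothetical key

reveal_type(app.config["MY_SETTING"])  # Pylance (strict): revealed type is Unknown/Any
```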
|
closed
|
2024-02-08T14:56:01Z
|
2024-02-23T00:05:37Z
|
https://github.com/pallets/flask/issues/5407
|
[] |
Zenthae
| 1 |
keras-team/autokeras
|
tensorflow
| 1,798 |
Bug: AutoModel constructor does not scale well.
|
### Bug Description
AutoModel constructor does not scale well to 5000+ inputs & outputs.
### Bug Reproduction
Code for reproducing the bug:
```python
import autokeras as ak
from autokeras import AutoModel

X_train = []  # of size 5000
Y_train = []  # of size 5000
am: AutoModel = ak.AutoModel(inputs=[ak.Input() for _ in range(0, len(X_train))],
                             outputs=[ak.RegressionHead() for _ in range(0, len(Y_train))])
```
Data used by the code:
### Expected Behavior
The call just hangs and eventually the process is killed.
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.10.6
- autokeras: 1.02
- keras-tuner:
- scikit-learn:
- numpy: 1.23.3
- pandas:
- tensorflow:
### Additional context
Using a server grade CPU.
<img width="441" alt="Screen Shot 2022-11-04 at 3 16 50 PM" src="https://user-images.githubusercontent.com/8487086/200057617-a8d0bb03-b0f0-4b72-a137-759a98af2fe0.png">
|
open
|
2022-11-04T19:20:46Z
|
2022-12-05T18:25:11Z
|
https://github.com/keras-team/autokeras/issues/1798
|
[
"bug report"
] |
michaelcordero
| 1 |
flairNLP/fundus
|
web-scraping
| 178 |
Scraping "Occupy Democrats" over Sitemap
|
I've encountered a problem with the sitemap of Occupy Democrats.
They support a [sitemap](https://occupydemocrats.com/sitemap.xml) but a very large portion of sub-sitemaps, e.g. [sitemap-tax-post_tag-1227.xml](https://occupydemocrats.com/sitemap-tax-post_tag-1227.xml) lead to a non-XML list of articles, e.g. https://occupydemocrats.com/tag/zoe-lofgren/.
At the bottom of the sitemap are also standard articles that we should scrape, e.g. https://occupydemocrats.com/sitemap-pt-post-p1-2023-04.xml. Is the "solution" again to only return articles that are fully extracted? I feel like this is a way deeper-rooted problem with our scraper, and the "only return fully extracted articles" is a tiny band-aid requiring intervention from the user.
|
closed
|
2023-04-25T12:17:31Z
|
2023-06-25T18:05:08Z
|
https://github.com/flairNLP/fundus/issues/178
|
[
"help wanted",
"question"
] |
dobbersc
| 5 |
pennersr/django-allauth
|
django
| 4,042 |
Should allauth include HTML email templates in addition to plain text email templates?
|
django-allauth includes many email templates, all in plain text. It does not include any HTML email templates, as mentioned in [the documentation](https://docs.allauth.org/en/latest/common/email.html):
> The project does not contain any HTML email templates out of the box. When you do provide these yourself, note that both the text and HTML versions of the message are sent.
Here are some problems with email messages that are only in plain text:
- The plain text email messages currently do not wrap at 72 or 78 characters, they do not wrap at all. This may violate the expectations of some email clients, although I am not sure. HTML email does not have this problem.
- Email clients may display plain text email messages in a monospace font. Users may prefer a normal variable-width font, which may only be enabled for HTML emails.
- I suspect email clients don't handle right-to-left languages in plain text emails very well, whereas HTML supports right-to-left languages quite well.
I'm happy to contribute a pull request adding HTML email templates to django-allauth. It seems that this is a feature that many users of allauth would want, and it makes sense to have one contributor do the work for the community, rather than duplicating the work.
We would want to make sure that future modifications to the HTML email templates are reflected in the plain text templates, and vice versa. To prevent the text of the HTML templates drifting from the text of the plain text templates, I could write a test comparing the two, using [Django's `strip_tags`](https://docs.djangoproject.com/en/stable/ref/utils/#django.utils.html.strip_tags). `strip_tags` is reliable enough for use within tests, even if it is not reliable enough for untrusted input. A sketch of such a test follows.
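A sketch of that drift-guard test (the `.html` template is the proposed counterpart that does not exist in allauth yet, and the template name and context are illustrative):
```python
import re

from django.template.loader import render_to_string
from django.test import SimpleTestCase
from django.utils.html import strip_tags


class EmailTemplateParityTests(SimpleTestCase):
    """Sketch: keep HTML and plain text email templates from drifting apart."""

    def test_html_matches_plain_text(self):
        context = {"activate_url": "https://example.com/activate/"}  # illustrative context
        text = render_to_string("account/email/email_confirmation_message.txt", context)
        html = render_to_string("account/email/email_confirmation_message.html", context)

        def normalize(s):
            # Collapse whitespace so wrapping differences don't fail the comparison.
            return re.sub(r"\s+", " ", s).strip()

        self.assertEqual(normalize(strip_tags(html)), normalize(text))
```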
|
closed
|
2024-08-17T13:52:58Z
|
2024-08-22T17:25:13Z
|
https://github.com/pennersr/django-allauth/issues/4042
|
[] |
Flimm
| 5 |
saleor/saleor
|
graphql
| 16,954 |
Add support for "connect" type webhooks in Stripe plugin
|
**Problem:**
Currently, the Saleor Stripe plugin only supports webhooks for the primary account, defaulting the `connect` parameter to `false`. However, Stripe provides an option to set this parameter to `true` to allow webhooks to receive events from connected accounts, which is useful for multi-account setups.
**Proposed Solution:**
In the [stripe_api.py file](https://github.com/saleor/saleor/blob/main/saleor/payment/gateways/stripe/stripe_api.py#L73), update the webhook creation method to include an optional `connect` parameter, enabling users to choose between account-specific or connected account events, as outlined in [Stripe's documentation](https://docs.stripe.com/api/webhook_endpoints/create?lang=python#create_webhook_endpoint-connect).
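A minimal sketch of what this could look like, based on the Stripe documentation linked above; the function name and surrounding structure are illustrative, not Saleor's exact code:
```python
import stripe


def create_webhook_endpoint(api_key: str, url: str, connect: bool = False):
    # `connect=True` makes the endpoint receive events from connected
    # accounts instead of only the primary account.
    return stripe.WebhookEndpoint.create(
        api_key=api_key,
        url=url,
        enabled_events=["*"],
        connect=connect,
    )
```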
thanks
|
closed
|
2024-10-31T16:55:31Z
|
2025-03-13T10:58:15Z
|
https://github.com/saleor/saleor/issues/16954
|
[
"plugins",
"stripe"
] |
hersentino
| 3 |
ading2210/poe-api
|
graphql
| 92 |
AttributeError: module 'websocket' has no attribute 'WebSocketApp'.
|
```
Traceback (most recent call last):
  File "G:\AI\poe\poe\poe-api-main\poe-api-main\examples\send_message.py", line 10, in <module>
    client = poe.Client('',client_identifier=None)
  File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 130, in __init__
    self.connect_ws()
  File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 336, in connect_ws
    self.ws = websocket.WebSocketApp(
AttributeError: module 'websocket' has no attribute 'WebSocketApp'. Did you mean: 'WebSocket'?
```
|
closed
|
2023-06-02T04:18:20Z
|
2023-06-02T22:54:45Z
|
https://github.com/ading2210/poe-api/issues/92
|
[
"invalid"
] |
40740
| 6 |
keras-team/autokeras
|
tensorflow
| 954 |
Text classification: clf.export_model() raises the error below with autokeras 1.0.1 and tensorflow-gpu 2.1.0. What is going on?
|
```
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.core.Dense object at 0x000001A755E62DA0> and <tensorflow.python.keras.layers.core.Dense object at 0x000001A761126940>).
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.core.Dense object at 0x000001A755E62DA0> and <tensorflow.python.keras.layers.core.Dense object at 0x000001A761126940>).
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.core.Dense object at 0x000001A761126940> and <tensorflow.python.keras.layers.advanced_activations.Softmax object at 0x000001A75F4DD630>).
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.core.Dense object at 0x000001A761126940> and <tensorflow.python.keras.layers.advanced_activations.Softmax object at 0x000001A75F4DD630>).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-d4c056555dc8> in <module>
1 #第三步保存查看模型层次
----> 2 clf.export_model()
D:\anaconda\envs\ak1.0\lib\site-packages\autokeras\auto_model.py in export_model(self)
373 with trained weights.
374 """
--> 375 return self.tuner.get_best_model()
D:\anaconda\envs\ak1.0\lib\site-packages\autokeras\engine\tuner.py in get_best_model(self)
35
36 def get_best_model(self):
---> 37 model = super().get_best_models()[0]
38 model.load_weights(self.best_model_path)
39 return model
D:\anaconda\envs\ak1.0\lib\site-packages\kerastuner\engine\tuner.py in get_best_models(self, num_models)
229 """
230 # Method only exists in this class for the docstring override.
--> 231 return super(Tuner, self).get_best_models(num_models)
232
233 def _deepcopy_callbacks(self, callbacks):
D:\anaconda\envs\ak1.0\lib\site-packages\kerastuner\engine\base_tuner.py in get_best_models(self, num_models)
236 """
237 best_trials = self.oracle.get_best_trials(num_models)
--> 238 models = [self.load_model(trial) for trial in best_trials]
239 return models
240
D:\anaconda\envs\ak1.0\lib\site-packages\kerastuner\engine\base_tuner.py in <listcomp>(.0)
236 """
237 best_trials = self.oracle.get_best_trials(num_models)
--> 238 models = [self.load_model(trial) for trial in best_trials]
239 return models
240
D:\anaconda\envs\ak1.0\lib\site-packages\kerastuner\engine\tuner.py in load_model(self, trial)
155 with hm_module.maybe_distribute(self.distribution_strategy):
156 model.load_weights(self._get_checkpoint_fname(
--> 157 trial.trial_id, best_epoch))
158 return model
159
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\keras\engine\training.py in load_weights(self, filepath, by_name, skip_mismatch)
232 raise ValueError('Load weights is not yet supported with TPUStrategy '
233 'with steps_per_run greater than 1.')
--> 234 return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
235
236 @trackable.no_automatic_dependency_tracking
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\keras\engine\network.py in load_weights(self, filepath, by_name, skip_mismatch)
1191 save_format = 'h5'
1192 if save_format == 'tf':
-> 1193 status = self._trackable_saver.restore(filepath)
1194 if by_name:
1195 raise NotImplementedError(
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py in restore(self, save_path)
1281 graph_view=self._graph_view)
1282 base.CheckpointPosition(
-> 1283 checkpoint=checkpoint, proto_id=0).restore(self._graph_view.root)
1284 load_status = CheckpointLoadStatus(
1285 checkpoint,
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\tracking\base.py in restore(self, trackable)
207 # This object's correspondence with a checkpointed object is new, so
208 # process deferred restorations for it and its dependencies.
--> 209 restore_ops = trackable._restore_from_checkpoint_position(self) # pylint: disable=protected-access
210 if restore_ops:
211 self._checkpoint.new_restore_ops(restore_ops)
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\tracking\base.py in _restore_from_checkpoint_position(self, checkpoint_position)
906 restore_ops.extend(
907 current_position.checkpoint.restore_saveables(
--> 908 tensor_saveables, python_saveables))
909 return restore_ops
910
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py in restore_saveables(self, tensor_saveables, python_saveables)
287 "expecting %s") % (tensor_saveables.keys(), validated_names))
288 new_restore_ops = functional_saver.MultiDeviceSaver(
--> 289 validated_saveables).restore(self.save_path_tensor)
290 if not context.executing_eagerly():
291 for name, restore_op in sorted(new_restore_ops.items()):
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py in restore(self, file_prefix)
253 for device, saver in sorted(self._single_device_savers.items()):
254 with ops.device(device):
--> 255 restore_ops.update(saver.restore(file_prefix))
256 return restore_ops
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py in restore(self, file_prefix)
100 structured_restored_tensors):
101 restore_ops[saveable.name] = saveable.restore(
--> 102 restored_tensors, restored_shapes=None)
103 return restore_ops
104
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\training\saving\saveable_object_util.py in restore(self, restored_tensors, restored_shapes)
114 restored_tensor = array_ops.identity(restored_tensor)
115 return resource_variable_ops.shape_safe_assign_variable_handle(
--> 116 self.handle_op, self._var_shape, restored_tensor)
117
118
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py in shape_safe_assign_variable_handle(handle, shape, value, name)
295 with _handle_graph(handle):
296 value_tensor = ops.convert_to_tensor(value)
--> 297 shape.assert_is_compatible_with(value_tensor.shape)
298 return gen_resource_variable_ops.assign_variable_op(handle,
299 value_tensor,
D:\anaconda\envs\ak1.0\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py in assert_is_compatible_with(self, other)
1108 """
1109 if not self.is_compatible_with(other):
-> 1110 raise ValueError("Shapes %s and %s are incompatible" % (self, other))
1111
1112 def most_specific_compatible_shape(self, other):
ValueError: Shapes (20000, 32) and (32, 32) are incompatible
```
|
closed
|
2020-02-10T04:26:11Z
|
2020-04-19T03:53:13Z
|
https://github.com/keras-team/autokeras/issues/954
|
[
"bug report",
"wontfix"
] |
fucker007
| 5 |
tensorpack/tensorpack
|
tensorflow
| 871 |
StageOp and Unstage don't have a control dependence in StagingInput
|
I found that StageOp and Unstage don't have a control dependency in TensorBoard.
There is a NoOp that depends on both of them.
And I just found the code below:
```
def _before_run(self, ctx):
# This has to happen once, right before the first iteration.
# doing it in `before_train` may not work because QueueInput happens in before_train.
if not self._initialized:
self._initialized = True
self._prefill()
# Only step the stagingarea when the input is evaluated in this sess.run
fetches = ctx.original_args.fetches
if dependency_of_fetches(fetches, self._check_dependency_op):
return self.fetches
```
That means that Unstage doesn't have to run before StageOp except on the first step.
So StageOp may run before Unstage in the following steps, and there would be 2 batches of data in the StagingArea?
Am I right?
|
closed
|
2018-08-23T03:01:26Z
|
2018-09-05T01:40:49Z
|
https://github.com/tensorpack/tensorpack/issues/871
|
[] |
yogurfrul
| 5 |
awesto/django-shop
|
django
| 496 |
Invalid template name in 'extends' tag
|
./manage.py compilescss give:
Invalid template name in 'extends' tag:
Error parsing template /Users/malt/Env/py3.5/lib/python3.5/site-packages/cms/templates/cms/dummy.html: Invalid template name in 'extends' tag: ''. Got this from the 'template' variable.
Error parsing template /Users/malt/Env/py3.5/lib/python3.5/site-packages/menus/templates/menu/dummy.html: Invalid template name in 'extends' tag: ''. Got this from the 'template' variable.
|
closed
|
2017-01-05T10:12:02Z
|
2017-01-05T10:32:48Z
|
https://github.com/awesto/django-shop/issues/496
|
[] |
maltitco
| 2 |
seleniumbase/SeleniumBase
|
web-scraping
| 2,608 |
Headless Not Passing Captchas in Some Cases
|
For example, with seleniumbase on https://www.marketwatch.com/investing/stock/GOOG, headless UC Mode doesn't get past what I believe is this captcha script: `<script data-cfasync="false" src="https://interstitial.captcha-delivery.com/i.js">`
It was working yesterday though. Weird. Any ideas?
```python
from pyvirtualdisplay import Display
from seleniumbase import Driver

display = Display(visible=0, size=(1920, 1200))
display.start()
browser = Driver(uc=True, headless=True)
browser.get(url)  # url = the MarketWatch page above
```
|
closed
|
2024-03-15T20:01:17Z
|
2024-03-16T02:06:33Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2608
|
[
"duplicate",
"workaround exists",
"UC Mode / CDP Mode"
] |
own3mall
| 4 |
tox-dev/tox
|
automation
| 3,394 |
Warn or error on invalid extras
|
If an extra is misspelled or missing, tox should give an error or warning. For example when running `tox -e badextras` with
```
[testenv:badextras]
extras = missing
commands = python --version
```
I came across this when I was debugging a project with an env like
```
[testenv:docs]
...
extras = build_docs
```
where the project.optional-dependencies were not getting installed (the underscore is problematic https://github.com/tox-dev/tox/issues/2655). A warning or error would have been very helpful in figuring out what was happening.
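For context, extras names get normalized like package names, which is why the underscore variant silently matches nothing; a quick demonstration with the `packaging` library (an assumption about the normalization rule involved, not tox's exact code path):
```python
from packaging.utils import canonicalize_name

print(canonicalize_name("build_docs"))   # -> "build-docs"
print(canonicalize_name("build_docs") == canonicalize_name("build-docs"))  # True
```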
|
closed
|
2024-10-07T15:21:28Z
|
2025-02-15T17:56:14Z
|
https://github.com/tox-dev/tox/issues/3394
|
[
"help:wanted",
"enhancement"
] |
eaubin
| 2 |
yunjey/pytorch-tutorial
|
pytorch
| 172 |
[image captioning] issues in training phase
|
Hi, it's a nice work and very helpful for beginners.
There is an issue when I write my own code following your image captioning code.
In the training example, you said that for an image description "**Giraffes standing next to each other**", the source sequence is a list containing **['start', 'Giraffes', 'standing', 'next', 'to', 'each', 'other']**, but the target sequence should be **['Giraffes', 'standing', 'next', 'to', 'each', 'other', 'end']**.
When we feed the word **start** to the decoder, it is expected to output the word **'Giraffes'**, and in the next time step, when we feed the word **'Giraffes'** to the decoder, it will output **'standing'**.
But what makes me confused is that in **dataloader.py** you padded the caption as **start Giraffes standing next to each other end**. And in **train.py** you feed the padded caption to the decoder to get the output, as well as use the padded caption as ground truth to calculate the cross entropy loss. That looks strange, because you feed the word **start** to the decoder to generate **start**, and in the next time step feed the word **'Giraffes'** to generate **'Giraffes'**...
In my model, the loss becomes 0. It simply reads words from the input sequence and outputs them as the generated words. What I thought is that the i-th word in the input sequence should be the (i-1)-th word in the output sequence. But I'm not sure if there is some trick you did somewhere else to shift the input and output sequences.
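For reference, a minimal sketch of the shifted alignment I would expect (the decoder signature and padding index are illustrative assumptions):
```python
import torch
import torch.nn.functional as F

def caption_loss(decoder, features, captions, pad_idx=0):
    # captions: (batch, seq_len) ids like [<start>, w1, ..., wn, <end>, <pad>...]
    inputs = captions[:, :-1]    # feed <start> ... wn to the decoder
    targets = captions[:, 1:]    # expect w1 ... <end> as outputs
    logits = decoder(features, inputs)   # assumed shape: (batch, seq_len-1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_idx,    # assumption: padding index is 0
    )
```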
I would be very thankful for any kind reply.
|
open
|
2019-04-08T15:58:38Z
|
2020-09-29T09:52:15Z
|
https://github.com/yunjey/pytorch-tutorial/issues/172
|
[] |
Lzhushuai
| 1 |
graphistry/pygraphistry
|
jupyter
| 165 |
[DOCS] Spark
|
Refresh for modern practices -- code style, arrow
|
open
|
2020-08-12T02:54:55Z
|
2020-08-12T02:54:55Z
|
https://github.com/graphistry/pygraphistry/issues/165
|
[
"help wanted",
"docs",
"good-first-issue"
] |
lmeyerov
| 0 |
ansible/awx
|
django
| 15,214 |
Tech Preview UI does not present Source Control credentials to project
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx-operator/issues) for duplicates.
- [X] I understand that the AWX Operator is open source software provided for free and that I might not receive a timely response.
### Bug Summary
When using the Tech Preview UI, you are unable to reference any of the credential types of "Source Control" when adding / updating a Project. Switching back to the legacy UI resolves the issue.
### AWX Operator version
2.16.1
### AWX version
24.3.1
### Kubernetes platform
kubernetes
### Kubernetes/Platform version
k3s
### Modifications
no
### Steps to reproduce
Add a Source Control credential, doesn't matter if using username/password combo or SSH private key.
Add a new Project, attempt to reference the credential
<img width="1728" alt="image" src="https://github.com/ansible/awx-operator/assets/76797306/14ad12ef-9804-4a0d-9470-ed3a36e3b1cb">
### Expected results
Source Control credential should be presented
### Actual results
No credentials are presented, despite returning the source control credential in the payload
<img width="1728" alt="image" src="https://github.com/ansible/awx-operator/assets/76797306/e4c3f9fb-bc66-464b-8c24-0a33908521c0">
API call made by the frontend: `http://172.16.0.202:30080/api/v2/credential_types/?kind=scm` and `http://172.16.0.202:30080/api/v2/credentials/?credential_type=2&order_by=name&page=1&page_size=10`
### Additional information
Switching back to the legacy UI resolves the issue
<img width="1728" alt="image" src="https://github.com/ansible/awx-operator/assets/76797306/fd2433eb-18b7-4346-8269-998d09a0ecb7">
API call made by the legacy UI: `http://172.16.0.202:30080/api/v2/credentials/?credential_type=2&order_by=name&page=1&page_size=5`
### Operator Logs
_No response_
|
open
|
2024-05-09T10:41:14Z
|
2025-02-13T22:19:48Z
|
https://github.com/ansible/awx/issues/15214
|
[
"type:bug",
"needs_triage",
"community"
] |
cdot65
| 1 |
waditu/tushare
|
pandas
| 1,708 |
[BUG] Server error (database exception)
|
{'api_name': 'bak_basic', 'token': '591e6891f9287935f45fc712bcf62335a81cd6829ce76c21c0fdf7b2', 'params': {'trade_date': '20221107'}, 'fields': ''}
{'code': 50101, 'msg': 'Server error (database exception), please try again later! We would appreciate it if you could report the error to us, thanks!', 'message': "(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT `TRADE_DATE`,`TS_CODE`,`NAME`,`INDUSTRY`,`AREA`,`PE`,`FLOAT_SHARE`,`TOTAL_SHARE`,`TOTAL_ASSETS`,`LIQUID_ASSETS`,`FIXED_ASSETS`,`RESERVED`,`RESERVED_PERSHARE`,`EPS`,`BVPS`,`PB`,`LIST_DATE`,`UNDP`,`PER_UNDP`,`REV_YOY`,`PROFIT_YOY`,`GPR`,`NPR`,`HOLDER_NUM` FROM `TS_STK_TDX_BASIC` WHERE `TRADE_DATE` = '20221107' ORDER BY `TRADE_DATE` desc LIMIT 7000]\n(Background on this error at: http://sqlalche.me/e/13/e3q8)", 'data': None}
|
open
|
2023-06-24T05:08:02Z
|
2023-06-24T05:08:02Z
|
https://github.com/waditu/tushare/issues/1708
|
[] |
NeoWang9999
| 0 |
FlareSolverr/FlareSolverr
|
api
| 1,444 |
[yggcookie] (testing) Exception (yggcookie): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 100.0 seconds.: The operation was canceled.
|
### Have you checked our README?
- [x] I have checked the README
### Have you followed our Troubleshooting?
- [x] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [x] I have checked older issues, open and closed
### Have you checked the discussions?
- [x] I have read the Discussions
### Have you ACTUALLY checked all these?
YES
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no]
- Are you using a Proxy: [yes/no]
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
An error occurred while testing this indexer
Exception (yggcookie): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 100.0 seconds.: The operation was canceled.
[Click here to open an issue on GitHub for FlareSolverr.](https://github.com/FlareSolverr/FlareSolverr/issues/new?template=bug_report.yml&title=[yggcookie]%20(testing)%20Exception%20(yggcookie)%3A%20FlareSolverr%20was%20unable%20to%20process%20the%20request%2C%20please%20check%20FlareSolverr%20logs.%20Message%3A%20Error%3A%20Error%20solving%20the%20challenge.%20Timeout%20after%20100.0%20seconds.%3A%20The%20operation%20was%20canceled.)
### Logged Error Messages
```text
An error occurred while testing this indexer
Exception (yggcookie): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 100.0 seconds.: The operation was canceled.
Click here to open an issue on GitHub for FlareSolverr.
```
### Screenshots
_No response_
|
closed
|
2025-02-07T19:03:28Z
|
2025-02-08T01:44:21Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1444
|
[
"duplicate"
] |
Jamalouw
| 0 |
slackapi/bolt-python
|
fastapi
| 279 |
Add a default attribute for selected_options for ViewStateValue
|
Having a default attribute for `selected_options` would allow consistency with the other attributes available. As it is now, if a ViewStateValue object is created with an empty list for `selected_options` (e.g., a checkbox), no attribute will be created, leading to an AttributeError when accessed.
### Reproducible in:
#### The `slack_bolt` version
`slack-bolt==1.4.4`
#### Python runtime version
`Python 3.8.3`
#### OS info
```
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H524
Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
```
>>> d = {"selected_options": []}
>>> v = ViewStateValue(**d)
>>> v
<slack_sdk.ViewStateValue>
>>> v.to_dict()
{}
>>> v.selected_options
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'ViewStateValue' object has no attribute 'selected_options'
```
### Expected result:
It would be nice to have a default of `None`
### Actual result:
```
>>> d = {"selected_options": []}
>>> v = ViewStateValue(**d)
>>> v
<slack_sdk.ViewStateValue>
>>> v.to_dict()
{}
>>> v.selected_options
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'ViewStateValue' object has no attribute 'selected_options'
```
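A minimal sketch of the suggested default (an assumption about where such a change could land, not an actual slack_sdk diff):
```python
class ViewStateValue:
    def __init__(self, **kwargs):
        # Always create the attribute so access returns None instead of
        # raising AttributeError when the key is absent or the list is empty.
        self.selected_options = kwargs.get("selected_options")

v = ViewStateValue(selected_options=[])
assert v.selected_options == []  # attribute exists either way
```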
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
|
closed
|
2021-04-01T19:29:18Z
|
2021-04-01T23:47:46Z
|
https://github.com/slackapi/bolt-python/issues/279
|
[
"enhancement",
"improvement"
] |
scott-shields-github
| 1 |
sanic-org/sanic
|
asyncio
| 2,733 |
ImportError: cannot import name 'CLOSED' from 'websockets.connection'
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
We're using Sanic `21.12.2` at rasa and notice this bug whenever rasa tries to spin up a sanic server:
```
File "/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/sanic/mixins/startup.py", line 57, in <module>
from sanic.server.protocols.websocket_protocol import WebSocketProtocol
File "/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/sanic/server/protocols/websocket_protocol.py", line 3, in <module>
from websockets.connection import CLOSED, CLOSING, OPEN
ImportError: cannot import name 'CLOSED' from 'websockets.connection' (/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/websockets/connection.py)
```
It seems like https://github.com/sanic-org/sanic/pull/2609 addresses it but these changes are not available to the branch for version 21 `21.12LTS`.
### Code snippet
_No response_
### Expected Behavior
I would have expected to not see any error when starting the sanic server. I get expected behaviour when using sanic version `22.12.0` and `23.3.0`
### How do you run Sanic?
Sanic CLI
### Operating System
MacOS
### Sanic Version
Sanic 21.12.2; Routing 0.7.2
### Additional context
_No response_
|
open
|
2023-04-03T16:39:38Z
|
2023-07-04T19:40:09Z
|
https://github.com/sanic-org/sanic/issues/2733
|
[
"bug",
"help wanted",
"beginner"
] |
vcidst
| 3 |
huggingface/peft
|
pytorch
| 1,478 |
Connection to huggingface.co timed out when merging Qwen model
|
Why do I still need to connect to huggingface.co when all my model files are local? Is there a way to skip the connection? Even when I use a proxy, the connection still times out.
peft ==0.8.2
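Not an authoritative answer, but a workaround sketch: put the Hub client into offline mode so the `file_exists` probe is skipped. `HF_HUB_OFFLINE` is a documented huggingface_hub switch; whether this peft version forwards `local_files_only` is an assumption:
```python
import os

# Assumption: setting offline mode before loading makes huggingface_hub
# skip its network probes entirely.
os.environ["HF_HUB_OFFLINE"] = "1"

from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "/path/to/local/output_qwen_chat",  # local directory, illustrative path
    local_files_only=True,              # assumption: kwarg reaches the Hub calls
)
```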
```
The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to "AutoModelForCausalLM.from_pretrained".
Try importing flash-attention for faster inference...
Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [03:07<00:00, 23.45s/it]
Traceback (most recent call last):
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
self._validate_conn(conn)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connectionpool.py", line 1058, in _validate_conn
conn.connect()
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x000002BB7D9BC100>, 'Connection to huggingface.co timed out. (connect timeout=10)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\requests\adapters.py", line 440, in send
resp = conn.urlopen(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /output_qwen_chat/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002BB7D9BC100>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy\__main__.py", line 39, in <module>
cli.main()
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2022.16.1\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "d:\xin\text.py", line 151, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\peft\auto.py", line 115, in from_pretrained
tokenizer_exists = file_exists(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\hf_api.py", line 2219, in file_exists
get_hf_file_metadata(url, token=token)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\file_download.py", line 1624, in get_hf_file_metadata
r = _request_wrapper(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\file_download.py", line 402, in _request_wrapper
response = _request_wrapper(
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\file_download.py", line 425, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\requests\sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\requests\sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\huggingface_hub\utils\_http.py", line 63, in send
return super().send(request, *args, **kwargs)
File "E:\Program\Anaconda\envs\pytorch-gpu\lib\site-packages\requests\adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /output_qwen_chat/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002BB7D9BC100>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: e3217542-52b2-43bc-bd8d-411412c5d92e)')
```
|
closed
|
2024-02-18T06:22:05Z
|
2024-03-28T15:04:59Z
|
https://github.com/huggingface/peft/issues/1478
|
[] |
anyiz
| 3 |
youfou/wxpy
|
api
| 16 |
[Suggestion] Add an option to append a tail to messages
|
## Sometimes I want to make clear that a message was sent by a bot, to avoid unnecessary misunderstandings caused by the bot. I hope the maintainers can add an optional parameter that defines a tail appended to each message.
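A minimal sketch of what this could look like in user code today, as a wrapper, since wxpy has no such parameter built in:
```python
TAIL = "\n[This message was sent automatically by a bot]"

def send_with_tail(chat, text):
    # Append the tail to every outgoing text message.
    chat.send(text + TAIL)
```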
|
closed
|
2017-03-23T07:26:47Z
|
2017-03-23T07:39:01Z
|
https://github.com/youfou/wxpy/issues/16
|
[] |
Levstyle
| 2 |
matplotlib/matplotlib
|
data-science
| 28,956 |
[Bug]: Incorrect Y limits
|
### Bug summary
I am creating a dummy plot with 4 subplots. I set the y limits to -300 and 300 for each, but when the plot is drawn the y limits are always set to -200 and 200. Similarly, if I set them to 150, they are set to 100.
Is this expected behavior (and is the reason mentioned in the docs), or is this a bug?
### Code for reproduction
```Python
import matplotlib.pyplot as plt
fig, ax = plt.subplots(4, 1, sharex=True)
for i in range(4):
ax[i].set_ylim((-300, 300))
ax[i].set_ylabel(f"axis {i}")
fig.tight_layout()
plt.show()
```
### Actual outcome
<img width="635" alt="Screenshot 2024-10-09 at 2 40 54 PM" src="https://github.com/user-attachments/assets/2deae4a4-65fa-4c09-9ba0-d0a8cb53c8b0">
### Expected outcome
The limits should be -300 and 300 for each subplot. It never sets to 300 though.
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
None
|
closed
|
2024-10-09T06:42:46Z
|
2024-10-10T05:24:59Z
|
https://github.com/matplotlib/matplotlib/issues/28956
|
[
"Community support"
] |
savitha-suresh
| 3 |
randyzwitch/streamlit-folium
|
streamlit
| 84 |
Just a question: extract more values based on user click on Draw plugin.
|
Hi, thanks for this great library. Really helpful!
By the way, I am a beginner in web development and GitHub as well, so I apologize if my questions are naive.
I would like to ask: is it possible to return every value of a click event?
At the moment, I am trying to present a folium map on Streamlit. I used st_folium with the Draw plugin and then utilized some of its return values. However, I would like to go further.
For instance, I want the user to be able to comment on a certain feature or a drawn polygon. For that I created a Popup that contains an html form. I succeeded in attaching the form to the GeoJsonPopup; however, I have struggled to find a way to return the comment and have not found one yet. Do you have any idea how to solve it?
Another example regarding click events: I would like to get a click event when a feature, e.g. a polygon, has finished being created/edited. Is it possible to return a value on that event?
Thank you for you time.
Best wishes,
Leclab Research Assistance,
|
open
|
2022-09-09T02:26:36Z
|
2024-11-14T19:37:04Z
|
https://github.com/randyzwitch/streamlit-folium/issues/84
|
[
"enhancement"
] |
leclab0
| 1 |
kennethreitz/records
|
sqlalchemy
| 30 |
Bug with table headings
|
Database query:
``` Python
rows = db.query("select * from users")
```
it does not return the headers of the table; the result:

and if you try to export the data, there will be an error

because the table did not return headers, yet the code tries to take them:
``` Python
def dataset(self):
"""A Tablib Dataset representation of the ResultSet."""
# Create a new Tablib Dataset.
data = tablib.Dataset()
# Set the column names as headers on Tablib Dataset.
first = self[0]
data.headers = first._fields
```
|
closed
|
2016-02-10T06:30:02Z
|
2018-04-28T22:58:47Z
|
https://github.com/kennethreitz/records/issues/30
|
[
"invalid"
] |
dorosch
| 3 |
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,390 |
Exception on UNIQUE constraint failed: internaltip.tid, internaltip.progressive
|
### What version of GlobaLeaks are you using?
Hi @evilaliv3
I am seeing this error with v4.10.18
```
Platform:
Host:
Version: 4.10.18
sqlalchemy.exc.IntegrityError Wraps a DB-API IntegrityError.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
self.dialect.do_execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: internaltip.tid, internaltip.progressive
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/usr/lib/python3/dist-packages/globaleaks/orm.py", line 178, in _wrap
result = function(session, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/submission.py", line 249, in create_submission
return db_create_submission(session, tid, request, user_session, client_using_tor, client_using_mobile)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/submission.py", line 186, in db_create_submission
session.flush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2479, in flush
self._flush(objects)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2617, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2577, in _flush
flush_context.execute()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
persistence.save_obj(
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
_emit_insert_statements(
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 1136, in _emit_insert_statements
result = cached_connections[connection].execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1095, in _execute_clauseelement
ret = self._execute_context(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1249, in _execute_context
self._handle_dbapi_exception(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
self.dialect.do_execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: internaltip.tid, internaltip.progressive
[SQL: INSERT INTO internaltip (id, tid, creation_date, update_date, context_id, progressive, tor, mobile, score, expiration_date, enable_two_way_comments, enable_two_way_messages, enable_attachments, enable_whistleblower_identity, important, label, last_access, status, substatus, receipt_hash, crypto_prv_key, crypto_pub_key, crypto_tip_pub_key, crypto_tip_prv_key, crypto_files_pub_key) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: removed]
(Background on this error at: http://sqlalche.me/e/gkpj)
```
### What browser(s) are you seeing the problem on?
_No response_
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
email notifications sent to admins with the content above
### Proposed solution
_No response_
|
open
|
2023-03-20T11:14:57Z
|
2023-03-23T10:14:15Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3390
|
[
"T: Bug",
"C: Backend"
] |
aetdr
| 2 |
onnx/onnx
|
pytorch
| 6,602 |
Installing onnxruntime (GPU) 1.14.1 on Ubuntu 22.04 reports an error
|
# Ask a Question
When I installed ONNX Runtime v1.14.1 on Ubuntu, I executed the command `./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /var/local/CUDA-11.1 --cudnn_home /var/local/CUDA-11.1 --use_tensorrt --tensorrt_home /home/ps/train/YK/task/TensorRT-8.5.3.1` and it reported a subprocess.CalledProcessError. The currently recommended methods on GitHub, such as exporting `CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"` and running `git submodule update --init --recursive`, still report errors! Looking forward to any help!
### Question
2024-12-30 09:59:00,650 util.run [DEBUG] - Subprocess completed. Return code: 0
2024-12-30 09:59:00,651 build [INFO] - Building targets for Release configuration
2024-12-30 09:59:00,651 util.run [INFO] - Running subprocess in '/home/ps/train/YK/task/onnxruntime'
/usr/local/bin/cmake --build /home/ps/train/YK/task/onnxruntime/build/Linux/Release --config Release -- -j80
[ 1%] Running gen_proto.py on onnx/onnx.in.proto
[ 1%] Built target absl_spinlock_wait
[ 2%] Built target absl_log_severity
[ 2%] Built target onnxruntime_providers_shared
[ 2%] Building CUDA object CMakeFiles/onnxruntime_test_cuda_ops_lib.dir/home/ps/train/YK/task/onnxruntime/onnxruntime/test/shared_lib/cuda_ops.cu.o
[ 2%] Built target absl_exponential_biased
[ 2%] Built target absl_int128
[ 3%] Built target clog
[ 3%] Generating onnxruntime.lds, generated_source.c
[ 3%] Built target flatbuffers
[ 3%] Built target absl_civil_time
[ 3%] Built target custom_op_invalid_library
[ 3%] Built target onnxruntime_mocked_allocator
nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
[ 3%] Built target gtest
gmake[2]: *** [CMakeFiles/onnxruntime_test_cuda_ops_lib.dir/build.make:77:**CMakeFiles/onnxruntime_test_cuda_ops_lib.dir/home/ps/train/YK/task/onnxruntime/onnxruntime/test/shared_lib/cuda_ops.cu.o] error 1**
[ 3%] Building CUDA object CMakeFiles/custom_op_library.dir/home/ps/train/YK/task/onnxruntime/onnxruntime/test/shared_lib/cuda_ops.cu.o
gmake[1]: *** [CMakeFiles/Makefile2:2222:CMakeFiles/onnxruntime_test_cuda_ops_lib.dir/all] error 2
gmake[1]: *** Waiting for unfinished tasks....
nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
nvcc fatal : A single input file is required for a non-link phase when an outputfile is specified
gmake[2]: *** [CMakeFiles/custom_op_library.dir/build.make:77:**CMakeFiles/custom_op_library.dir/home/ps/train/YK/task/onnxruntime/onnxruntime/test/shared_lib/cuda_ops.cu.o] error 1**
**gmake[1]: *** [CMakeFiles/Makefile2:2796:CMakeFiles/custom_op_library.dir/all] error 2**
[ 3%] Built target absl_time_zone
[ 4%] Built target absl_raw_logging_internal
[ 6%] Built target nsync_cpp
[ 7%] Built target cpuinfo
[ 8%] Built target re2
Generating symbol file for ['cpu', 'cuda', 'tensorrt']
VERSION:1.14.1
Processing /home/ps/train/YK/task/onnxruntime/build/Linux/Release/_deps/onnx-src/onnx/onnx.in.proto
Writing /home/ps/train/YK/task/onnxruntime/build/Linux/Release/_deps/onnx-build/onnx/onnx-ml.proto
Writing /home/ps/train/YK/task/onnxruntime/build/Linux/Release/_deps/onnx-build/onnx/onnx-ml.proto3
generating /home/ps/train/YK/task/onnxruntime/build/Linux/Release/_deps/onnx-build/onnx/onnx_pb.py
[ 10%] Built target flatc
[ 10%] Built target onnxruntime_generate_def
[ 10%] Running C++ protocol buffer compiler on /home/ps/train/YK/task/onnxruntime/build/Linux/Release/_deps/onnx-build/onnx/onnx-ml.proto
[ 15%] Built target onnxruntime_mlas
[ 15%] Built target gen_onnx_proto
**gmake: *** [Makefile:166:all] error 2**
Traceback (most recent call last):
File "/home/ps/train/YK/task/onnxruntime/tools/ci_build/build.py", line 2737, in <module>
sys.exit(main())
File "/home/ps/train/YK/task/onnxruntime/tools/ci_build/build.py", line 2634, in main
build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
File "/home/ps/train/YK/task/onnxruntime/tools/ci_build/build.py", line 1395, in build_targets
run_subprocess(cmd_args, env=env)
File "/home/ps/train/YK/task/onnxruntime/tools/ci_build/build.py", line 764, in run_subprocess
return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
File "/home/ps/train/YK/task/onnxruntime/tools/python/util/run.py", line 49, in run
completed_process = subprocess.run(
File "/home/ps/anaconda3/envs/mmagic/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/home/ps/train/YK/task/onnxruntime/build/Linux/Release', '--config', 'Release', '--', '-j80']' returned non-zero exit status 2.
### Further information
version:
**onnxruntime(GPU): v1.14.1,**
ubuntu 22.04 LTS,
**cuda:11.1,
tensorRT:8.5.3.1,**
GPU: RTX 3090,
cmake: 3.30.6,
python: 3.10.0
protobuf: 3.20.3
|
closed
|
2024-12-30T02:05:10Z
|
2024-12-30T20:39:24Z
|
https://github.com/onnx/onnx/issues/6602
|
[
"question"
] |
yuan-kai-design
| 1 |
saulpw/visidata
|
pandas
| 2,261 |
website has dead links for man pages/quick docs
|
**Small description**
https://www.visidata.org/docs/ and https://www.visidata.org/ both link to https://www.visidata.org/docs/man/ which has an Oops! (404?) page.
**Expected result**
See quick ref guide (2.0?)
**Actual result with screenshot**
> Oops!
> Oops!
> Sorry, we can't find that page!
>
> Take me back to the homepage
**Steps to reproduce with sample data and a .vd**
NA
**Additional context**
NA
|
closed
|
2024-01-19T16:48:11Z
|
2024-01-21T04:50:13Z
|
https://github.com/saulpw/visidata/issues/2261
|
[
"bug",
"fixed"
] |
clach04
| 1 |
davidsandberg/facenet
|
computer-vision
| 1,169 |
The effect of fixed_image_standardization
|
Hello,
Could anyone here explain why we need to use fixed_image_standardization? What is the effect of this transform on the image and on the model? Is there any paper that describes this?
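For reference, the facenet-pytorch port defines this transform as a fixed affine rescaling; a sketch of that commonly cited definition (whether this repo's TensorFlow code uses exactly these constants should be verified):
```python
def fixed_image_standardization(image_tensor):
    # Map uint8 pixel values [0, 255] to roughly [-1, 1] with fixed constants,
    # instead of per-image mean/std as in tf.image.per_image_standardization.
    return (image_tensor - 127.5) / 128.0
```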
|
open
|
2020-08-24T08:37:30Z
|
2021-04-16T08:41:00Z
|
https://github.com/davidsandberg/facenet/issues/1169
|
[] |
glmanhtu
| 2 |
kornia/kornia
|
computer-vision
| 2,093 |
Update configs to use `dataclasses`
|
We have some specific configs for some algorithms, will be nice to update them from using `dicts`/`TypedDict` to `dataclasses`.
The idea here is to do it in a way that does not break things, so we should have an interface (to/from) between `dict` and the `dataclasses`.
Example of what we can explore for these methods
```python
>>> from dataclasses import dataclass, asdict
>>> @dataclass
... class A:
... b: int
...
>>> asdict(A(1))
{'b': 1}
>>> A(**asdict(A(1)))
A(b=1)
```
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2092#discussion_r1049564760_
List of some configs to be replaced:
- [ ] kornia.feature.adalam.core.AdalamConfig
- [ ] kornia.contrib.face_detection.FaceDetector.config #2851
- [ ] kornia.feature.keynet.KeyNet_conf - #2254
- [ ] [kornia.feature.loftr.loftr.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/loftr/loftr.py#L25)
- [ ] kornia.feature.loftr.loftr_module.fine_preprocess.FinePreprocess.config
- [ ] kornia.feature.loftr.loftr_module.transformer.LocalFeatureTransformer.config
- [ ] kornia.feature.loftr.utils.coarse_matching.CoarseMatching.config
- [ ] kornia.feature.loftr.utils.supervision config
- [ ] [kornia.feature.loftr.backbone.resnet_fpn config](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/loftr/backbone/resnet_fpn.py#L50)
- [ ] kornia.feature.matching._get_default_fginn_params
- [x] [kornia.feature.sold2.backbones.SOLD2Net.cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/backbones.py#L377) - #2880
- [x] [kornia.feature.sold2.sold2.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/sold2.py#L18)
- [x] [kornia.feature.sold2.sold2_detector.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/sold2_detector.py#L17)
- [ ] #2901
- [ ] #2908
|
open
|
2022-12-16T12:59:43Z
|
2024-05-16T23:36:50Z
|
https://github.com/kornia/kornia/issues/2093
|
[
"enhancement :rocket:",
"help wanted",
"good first issue",
"code heatlh :pill:"
] |
johnnv1
| 11 |
scikit-hep/awkward
|
numpy
| 3,128 |
Scalar type promotion not working
|
### Version of Awkward Array
2.6.4
### Description and code to reproduce
In the following code
```python
from typing import Annotated
import numpy as np
import awkward as ak
from enum import IntEnum
class ParticleOrigin(IntEnum):
NonDefined: int = 0
SingleElec: int = 1
SingleMuon: int = 2
# works as expected
print(np.arange(10) == ParticleOrigin.SingleElec)
# errors
print(ak.Array(np.arange(10)) == ParticleOrigin.SingleElec)
```
numpy manages to recognize that the `IntEnum` is promotable to int64, but awkward fails with the error:
```
Traceback (most recent call last):
File "/Users/ncsmith/src/tmp.py", line 16, in <module>
print(ak.Array(np.arange(10)) == ParticleOrigin.SingleElec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_operators.py", line 53, in func
return ufunc(self, other)
^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/highlevel.py", line 1516, in __array_ufunc__
return ak._connect.numpy.array_ufunc(ufunc, method, inputs, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_connect/numpy.py", line 466, in array_ufunc
out = ak._broadcasting.broadcast_and_apply(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_broadcasting.py", line 968, in broadcast_and_apply
out = apply_step(
^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_broadcasting.py", line 946, in apply_step
return continuation()
^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_broadcasting.py", line 915, in continuation
return broadcast_any_list()
^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_broadcasting.py", line 622, in broadcast_any_list
outcontent = apply_step(
^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_broadcasting.py", line 928, in apply_step
result = action(
^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_connect/numpy.py", line 432, in action
result = backend.nplike.apply_ufunc(ufunc, method, input_args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_nplikes/array_module.py", line 208, in apply_ufunc
return self._apply_ufunc_nep_50(ufunc, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ncsmith/src/commonenv/lib/python3.12/site-packages/awkward/_nplikes/array_module.py", line 235, in _apply_ufunc_nep_50
resolved_dtypes = ufunc.resolve_dtypes(arg_dtypes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Provided dtype must be a valid NumPy dtype, int, float, complex, or None.
This error occurred while calling
numpy.equal.__call__(
<Array [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] type='10 * int64'>
<ParticleOrigin.SingleElec: 1>
)
```
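Until the promotion handles `IntEnum` scalars, a workaround sketch is to cast the member explicitly:
```python
# IntEnum members are ints, so an explicit cast sidesteps the failing
# ufunc.resolve_dtypes call (names as in the snippet above).
print(ak.Array(np.arange(10)) == int(ParticleOrigin.SingleElec))
```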
cc @kratsg
|
open
|
2024-05-24T13:53:39Z
|
2024-05-24T14:35:26Z
|
https://github.com/scikit-hep/awkward/issues/3128
|
[
"bug (unverified)"
] |
nsmith-
| 2 |
gee-community/geemap
|
streamlit
| 2,186 |
geemap is still relying on pkg_ressource that is deprecated starting from Python 3.10
|
I was testing my code with Python 3.11 in pyGAUL and everything started to crash from Python 3.11 on. They completely removed the `pkg_resources` module from the standard lib, but you are still importing it in `conversion.py`:
https://github.com/gee-community/geemap/blob/fa56084f15b786ba1afad042a20dbe6f113edda2/geemap/conversion.py#L23
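A possible migration sketch, assuming `pkg_resources` is only used there to locate files shipped with the package (the file path below is illustrative):
```python
from importlib import resources  # stdlib replacement, Python >= 3.9

# e.g. instead of pkg_resources.resource_filename("geemap", "data/template.py"):
template_path = resources.files("geemap") / "data/template.py"
```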
|
closed
|
2024-12-09T07:39:14Z
|
2024-12-09T14:33:23Z
|
https://github.com/gee-community/geemap/issues/2186
|
[
"bug"
] |
12rambau
| 0 |
google-research/bert
|
nlp
| 752 |
tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Key: masked_lm_weights. Can't parse serialized Example.
|
Tensorflow:1.12
python3.6
When running run_pretraining.py, I hit the error above. When I set max_predictions_per_seq=5 there is no error, but when I set max_predictions_per_seq=10 the error happens.
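This error pattern usually appears when the TFRecords were written with a different `max_predictions_per_seq` than the one passed to `run_pretraining.py`; a sketch of why the fixed-length parse then fails, mirroring how the feature is declared (under that assumption):
```python
import tensorflow as tf

max_predictions_per_seq = 10  # must match the value used in create_pretraining_data.py

name_to_features = {
    # Fixed-length feature: parsing fails with "Can't parse serialized Example"
    # when the serialized list has a different length (e.g. written with 5).
    "masked_lm_weights": tf.FixedLenFeature([max_predictions_per_seq], tf.float32),
}

def parse(serialized_record):
    return tf.parse_single_example(serialized_record, name_to_features)
```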
|
open
|
2019-07-08T15:49:58Z
|
2020-08-30T20:16:25Z
|
https://github.com/google-research/bert/issues/752
|
[] |
yw411
| 2 |
litestar-org/litestar
|
asyncio
| 3,516 |
Bug: LoggingMiddleware breaks static file serving
|
### Description
If you add the logging middleware without excluding the /static route, you will get the following error:
```
Traceback (most recent call last):
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/streaming.py", line 134, in send_body
await self._listen_for_disconnect(cancel_scope=task_group.cancel_scope, receive=receive)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/streaming.py", line 100, in _listen_for_disconnect
await self._listen_for_disconnect(cancel_scope=cancel_scope, receive=receive)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/streaming.py", line 94, in _listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 535, in receive
await self.message_event.wait()
File "/usr/lib/python3.11/asyncio/locks.py", line 213, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f9a8eadd150
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 219, in __call__
| await self.app(scope, receive, send)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/routes/http.py", line 86, in handle
| await response(scope, receive, send)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 200, in __call__
| await self.send_body(send=send, receive=receive)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/file.py", line 187, in send_body
| await super().send_body(send=send, receive=receive)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/streaming.py", line 132, in send_body
| async with create_task_group() as task_group:
| File "/workdir/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/streaming.py", line 117, in _stream
| await send(stream_event)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 226, in send_wrapper
| self.log_response(scope=scope)
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 136, in log_response
| extracted_data = self.extract_response_data(scope=scope)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 194, in extract_response_data
| connection_state.log_context.pop(HTTP_RESPONSE_START),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| KeyError: 'http.response.start'
+------------------------------------
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 219, in __call__
await self.app(scope, receive, send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/base.py", line 129, in wrapped_call
await original__call__(self, scope, receive, send) # pyright: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 112, in __call__
await self.app(scope, receive, send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 233, in __call__
await self.handle_request_exception(
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 263, in handle_request_exception
await response.to_asgi_response(app=None, request=request)(scope=scope, receive=receive, send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 194, in __call__
await self.start_response(send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 165, in start_response
await send(event)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 227, in send_wrapper
await send(message)
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 496, in send
raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'http.response.body', but got 'http.response.start'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 219, in __call__
await self.app(scope, receive, send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/_asgi/asgi_router.py", line 89, in __call__
await asgi_app(scope, receive, send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 233, in __call__
await self.handle_request_exception(
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 263, in handle_request_exception
await response.to_asgi_response(app=None, request=request)(scope=scope, receive=receive, send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 194, in __call__
await self.start_response(send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 165, in start_response
await send(event)
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 496, in send
raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'http.response.body', but got 'http.response.start'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.11/site-packages/litestar/app.py", line 590, in __call__
await self.asgi_handler(scope, receive, self._wrap_send(send=send, scope=scope)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 233, in __call__
await self.handle_request_exception(
File "/workdir/.venv/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 263, in handle_request_exception
await response.to_asgi_response(app=None, request=request)(scope=scope, receive=receive, send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 194, in __call__
await self.start_response(send=send)
File "/workdir/.venv/lib/python3.11/site-packages/litestar/response/base.py", line 165, in start_response
await send(event)
File "/workdir/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 496, in send
raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'http.response.body', but got 'http.response.start'.
```
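For reference, excluding the static route (the commented-out line in the MCVE below) avoids the crash; a minimal sketch of that workaround:
```python
from litestar.middleware.logging import LoggingMiddlewareConfig

# Workaround sketch: keep the logging middleware away from the
# streaming static-file responses by excluding their route.
logging_middleware = LoggingMiddlewareConfig(
    exclude=["/static"],
    request_log_fields=["path", "method", "content_type", "query", "path_params"],
    response_log_fields=["status_code"],
).middleware
```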
### URL to code causing the issue
_No response_
### MCVE
```python
from typing import Final
from litestar import Litestar
from litestar.logging import StructLoggingConfig
from litestar.middleware.logging import LoggingMiddlewareConfig
from litestar.openapi import OpenAPIConfig
from litestar.openapi.plugins import SwaggerRenderPlugin
from litestar.static_files import create_static_files_router
logging_config: Final = StructLoggingConfig()
logging_middleware: Final = LoggingMiddlewareConfig(
# exclude=["/static"],
request_log_fields=["path", "method", "content_type", "query", "path_params"],
response_log_fields=["status_code"],
).middleware
openapi_config: Final = OpenAPIConfig(
title="Insourcing API",
version="0.1.0",
render_plugins=[
SwaggerRenderPlugin(
js_url="/static/swagger-ui-bundle.js",
standalone_preset_js_url="/static/swagger-ui-standalone-preset.js",
css_url="/static/swagger-ui.css",
),
],
path="/docs",
)
app = Litestar(
route_handlers=[
create_static_files_router(
path="/static",
directories=["./static/"],
include_in_schema=False,
),
],
openapi_config=openapi_config,
logging_config=logging_config,
middleware=[logging_middleware],
)
```
### Steps to reproduce
```bash
1. Copy+paste code example
2. Create `static` folder near script
3. Download and put these 3 files
- https://cdn.jsdelivr.net/npm/[email protected]/swagger-ui-bundle.js
- https://cdn.jsdelivr.net/npm/[email protected]/swagger-ui.css
- https://cdn.jsdelivr.net/npm/[email protected]/swagger-ui-standalone-preset.js
4. Run with uvicorn - `uvicorn test:app --host 0.0.0.0 --port 8000`
5. Open localhost:8000/docs
6. Got an error
7. Uncomment line 13
8. Run again
9. Get no error
```
### Litestar Version
2.8.2
### Platform
wsl - 2.1.5.0
ubuntu - 23.04
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
|
open
|
2024-05-22T09:24:11Z
|
2025-03-20T15:54:43Z
|
https://github.com/litestar-org/litestar/issues/3516
|
[
"Bug :bug:"
] |
wallseat
| 2 |
ivy-llc/ivy
|
pytorch
| 28,587 |
fix `greater_equal` at `tf frontend`
|
closed
|
2024-03-13T16:04:54Z
|
2024-03-14T22:04:34Z
|
https://github.com/ivy-llc/ivy/issues/28587
|
[
"Sub Task"
] |
samthakur587
| 0 |
|
sammchardy/python-binance
|
api
| 1,178 |
futures_aggregate_trades
|
Hi, the `futures_aggregate_trades` function does not aggregate the quantities.
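A minimal reproduction sketch (credential placeholders; `BTCUSDT` is just an example symbol):
```python
from binance.client import Client

client = Client(api_key="<key>", api_secret="<secret>")

# Each returned record still carries its own quantity ("q");
# nothing is summed across trades.
trades = client.futures_aggregate_trades(symbol="BTCUSDT", limit=5)
for t in trades:
    print(t["a"], t["p"], t["q"])
```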
|
open
|
2022-04-27T22:02:35Z
|
2022-04-27T22:02:35Z
|
https://github.com/sammchardy/python-binance/issues/1178
|
[] |
Tanerbaysa
| 0 |
vastsa/FileCodeBox
|
fastapi
| 273 |
[UI issue] Some text disappears entirely in dark mode
|
For example, the highlighted wget download text: unless I select some of the characters, the rest is nearly unreadable. Also, compared to light mode, gray labels in dark mode such as the expiration time and secure-encryption text are hard to read, and only the background color behind the file logo under the file details is inverted. Why not use different dark shades to distinguish the functional areas? An inverted (white) QR code also stays perfectly readable. Take my experience with Meituan's dark mode as an example: the Xiaomi system forced a dark mode onto Meituan and simply inverted the QR code to white. I have used it for three years and the redemption QR code has never failed a scanner gun or camera, so there is no need to worry about that.


|
closed
|
2025-03-05T19:36:30Z
|
2025-03-09T14:47:22Z
|
https://github.com/vastsa/FileCodeBox/issues/273
|
[] |
Marrrrrrrrry
| 1 |
RomelTorres/alpha_vantage
|
pandas
| 183 |
NESN.SWI + outputsize=full produces error
|
Here's another ticker that produces the "invalid API call" error:
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=NESN.SWI&outputsize=full&apikey=KEY
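Presumably the same failure reproduces through the library (a sketch; `KEY` is a placeholder):
```python
from alpha_vantage.timeseries import TimeSeries

ts = TimeSeries(key="KEY", output_format="pandas")
# Expecting the "invalid API call" error from Alpha Vantage here.
data, meta = ts.get_daily(symbol="NESN.SWI", outputsize="full")
```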
See also #173 .
|
closed
|
2020-01-25T13:16:41Z
|
2021-07-21T01:18:31Z
|
https://github.com/RomelTorres/alpha_vantage/issues/183
|
[
"av_issue"
] |
ymyke
| 9 |
biolab/orange3
|
pandas
| 6,581 |
Error reading column header including char '#' in Excel .xlsx file
|
**What's wrong?**
If a column header in an .xlsx file is 'abc #1', the File widget loads the file but shows the header as '1'.
**How can we reproduce the problem?**
<img width="128" alt="xlsx" src="https://github.com/biolab/orange3/assets/43062047/537943f8-1738-4a10-886b-b3bc97f7537f">
<img width="234" alt="file widget" src="https://github.com/biolab/orange3/assets/43062047/346b2dc4-b021-484e-903b-4e44158d0dd5">
**What's your environment?**
- Operating system: Win 10
- Orange version: 3.35.0
- How you installed Orange: conda
|
closed
|
2023-09-19T02:47:27Z
|
2023-10-01T14:11:06Z
|
https://github.com/biolab/orange3/issues/6581
|
[
"bug"
] |
neverseek
| 0 |
google-research/bert
|
nlp
| 957 |
How does Google calculate a document embeddings using BERT in its new search?
|
Google has started using BERT in its search engine. I imagine it creates embeddings for the search query, then computes some similarity measure against the candidate websites/pages, and finally ranks them in the search results.
I am curious how they create embeddings for the documents (the candidate websites/pages), if they do at all. Or am I interpreting it wrong?
|
open
|
2019-12-10T09:31:51Z
|
2020-02-15T17:47:09Z
|
https://github.com/google-research/bert/issues/957
|
[] |
ghost
| 2 |
marcomusy/vedo
|
numpy
| 542 |
Animation: inconsistent camera parameters between the play() function and values obtained from typing 'C' key
|
Hi @marcomusy,
I've got another issue for you with the Animation class. I observed that the camera parameters printed directly inside the `play` function differ from those I obtain by pressing the `C` key on my keyboard.
For example, the following code simply rotates the body 180 degrees about the z-axis. I can print the camera parameters by adding these print lines before `self.show` in the `play` function of the `Animation` class:
```Python
if dt > self.eps:
print(self.camera.GetPosition())
print(self.camera.GetFocalPoint())
print(self.camera.GetViewUp())
print(self.camera.GetClippingRange())
print(self.camera.GetDistance())
self.show(interactive=False, resetcam=self.resetcam)
```
When the rotation finishes, I press `C` to get camera parameters, and they are different. The last values printed are
```Python
(-653227.8500875884, -6233719.241304105, 1505.5680089146688)
(-653227.8500875884, -6233719.241304105, 9.46603775024414)
(0.0, 1.0, 0.0)
(1313.4217270188815, 1729.6674490770752)
1496.1019711644246
```
and the values obtained by pressing `C` are
```Python
plt.camera.SetPosition( [-555720.156, -6243167.25, 1485.5] )
plt.camera.SetFocalPoint( [-555720.156, -6243167.25, 9.466] )
plt.camera.SetViewUp( [0.0, 1.0, 0.0] )
plt.camera.SetDistance( 1476.034 )
plt.camera.SetClippingRange( [1293.555, 1709.299] )
```
Here is the code I used:
```Python
#!/usr/bin/env python3
import numpy as np
from vedo import TetMesh, show, screenshot, settings, Picture, buildLUT, Box, \
Plotter, Axes
from vedo.applications import Animation
import vtk
import time as tm
from vedo import settings
settings.allowInteraction=True
# Do some settings
settings.useDepthPeeling=False # Useful to show the axes grid
font_name = 'Theemim'
settings.defaultFont = font_name
settings.multiSamples=8
# settings.useParallelProjection = True # avoid perspective parallax
# Create a TetMesh object form the vtk file
ovoid_tet = TetMesh('final_mesh.1.vtk')
host_tet = ovoid_tet.clone()
# This will get rid of the background Earth unit and air unit in the model
# which leaves us with the central part of the model
ovoid_tet.threshold(name='cell_scalars', above=1, below=1)
host_tet.threshold(name='cell_scalars', above=2, below=2)
# Crop the entire mesh using a Box object (which is considered to be a mesh
# object in vedo)
# First build a Box object with its centers and dimensions
cent = [555700, 6243165, 100]
box = Box(pos=cent, size=(4000, 4000, 3000))
# So, we now cut the TetMesh object with a mesh (that Box object)
# host_tet.cutWithMesh(box, wholeCells=True)
host_tet.cutWithMesh(box, wholeCells=False)
# And we need to convert it to a mesh object for later plotting
ovoid = ovoid_tet.tomesh().lineWidth(1).lineColor('w')
host = host_tet.tomesh(fill=False).lineWidth(1).lineColor('w')
# We need to build a look up table for our color bar, and now it supports
# using category names as labels instead of the numerical values
# This was implemented upon my request
lut_table = [
# Value, color, alpha, category
(1.0, 'indianred', 1, 'Ovoid'),
(1.5, 'lightgray', 1, 'Host'),
]
lut = buildLUT(lut_table)
host.cmap(lut, 'cell_scalars', on='cells')
ovoid.cmap(lut, 'cell_scalars', on='cells')
# Set the camera position
plt = Animation(videoFileName=None, showProgressBar=False)
# size = [3940, 2160]
size = [2560, 1600]
plt.size = size
# # Trying to play with the Animation class
# plt.showProgressBar = True
plt.timeResolution = 0.01 # secs
# Fade in the ore body
plt.fadeIn(ovoid, t=0, duration=0.5)
# Rotate the ovoid body
plt.rotate(ovoid, axis='z', angle=180, t=0.5, duration=2)
plt.play()
```
Again, here is the data:
[final_mesh.1.vtk.zip](https://github.com/marcomusy/vedo/files/7572498/final_mesh.1.vtk.zip)
My plan was to move the camera from its endpoint after the rotation to somewhere else, but it turned out I could not, possibly because of this inconsistency. Any ideas?
|
open
|
2021-11-19T19:24:51Z
|
2021-11-24T13:34:34Z
|
https://github.com/marcomusy/vedo/issues/542
|
[
"bug",
"long-term"
] |
XushanLu
| 4 |
nteract/papermill
|
jupyter
| 479 |
Close notebook after execution
|
When I run `papermill.execute_notebook()` the executed notebook stays in memory and occupies space until I close the parent notebook.
Is there a way to close the notebook after the execution, while the parent notebook is still running?
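One workaround I am considering (just a sketch; assuming a child process releases all its memory on exit):
```python
import multiprocessing as mp
import papermill as pm

def _run(input_path: str, output_path: str) -> None:
    # Execute in a separate process so everything the run allocated
    # is freed when the process exits.
    pm.execute_notebook(input_path, output_path)

if __name__ == "__main__":
    p = mp.Process(target=_run, args=("input.ipynb", "output.ipynb"))
    p.start()
    p.join()
```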
|
closed
|
2020-03-13T11:54:12Z
|
2020-04-13T16:27:13Z
|
https://github.com/nteract/papermill/issues/479
|
[] |
Nikolai-Hlubek
| 5 |
pyppeteer/pyppeteer
|
automation
| 474 |
websockets.exceptions.ConnectionClosedOK: sent 1000 (OK); then received 1000 (OK) asyncio.exceptions.InvalidStateError: invalid state
|
Python 3.8.19
pyppeteer 1.0.2 2.0.0
Scenario: when visiting Google, a captcha appears; after the captcha is handled, an exception is thrown.
Exception stack trace:
```
connection unexpectedly closed
Task exception was never retrieved
future: <Task finished name='Task-960' coro=<Connection._async_send() done, defined at C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\pyppeteer\connection.py:69> exception=InvalidStateError('invalid state')>
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\pyppeteer\connection.py", line 73, in _async_send
await self.connection.send(msg)
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\websockets\legacy\protocol.py", line 635, in send
await self.ensure_open()
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\websockets\legacy\protocol.py", line 944, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedOK: sent 1000 (OK); then received 1000 (OK)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\pyppeteer\connection.py", line 79, in _async_send
await self.dispose()
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\pyppeteer\connection.py", line 170, in dispose
await self._on_close()
File "C:\ProgramData\Anaconda3\envs\lazbao_work\lib\site-packages\pyppeteer\connection.py", line 151, in _on_close
cb.set_exception(_rewriteError(
asyncio.exceptions.InvalidStateError: invalid state
```
|
open
|
2024-04-30T02:04:19Z
|
2024-07-11T03:39:49Z
|
https://github.com/pyppeteer/pyppeteer/issues/474
|
[] |
iamdaguduizhang
| 2 |
plotly/dash
|
jupyter
| 3,126 |
How to create a Loading component triggered by an external component
|
Hello,
I want to place a Loading component at a specific location on my page and have it triggered by the content of another Div that is not wrapped by the Loading component. Is that possible?
|
closed
|
2025-01-19T19:04:14Z
|
2025-01-23T20:25:19Z
|
https://github.com/plotly/dash/issues/3126
|
[] |
marfago
| 2 |
ipython/ipython
|
data-science
| 14,446 |
memory leakage with matplotlib
|
Running the following code in IPython causes a memory leak. The same thing occurs for both Agg and QtAgg.
```
import numpy as np
import matplotlib.pyplot as plt
import gc
for i in range(5):
fig = plt.figure(num=1, clear=True)
ax = fig.add_subplot()
ax.plot(np.arange(10**7))
gc.collect()
```
Running the same code in plain Python does not leak memory.
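For completeness, the obvious mitigation to try is an explicit close; whether it actually helps in the IPython case I have not verified:
```python
import numpy as np
import matplotlib.pyplot as plt

for i in range(5):
    fig = plt.figure(num=1, clear=True)
    ax = fig.add_subplot()
    ax.plot(np.arange(10**7))
    plt.close(fig)  # drop the figure reference explicitly
```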
python version 3.8.19
ipython version 8.12.2
matplotlib version 3.7.2
|
closed
|
2024-05-30T09:29:44Z
|
2024-05-31T12:10:29Z
|
https://github.com/ipython/ipython/issues/14446
|
[] |
SpaceWalker162
| 1 |
Nemo2011/bilibili-api
|
api
| 800 |
[Feature request] Hello author, please add support for querying an uploader's full video list, thanks
|
Hello author, your code is very easy to use and the integration is very thoughtful. Could you add support for querying all of an uploader's submissions, or their latest submissions? Thanks.
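Something like this is what I am hoping for (a sketch only; I am not sure of the exact API, and `uid=2` is just an example):
```python
import asyncio
from bilibili_api import user

async def main():
    u = user.User(uid=2)
    # Hoped-for call: page through the uploader's submitted videos.
    videos = await u.get_videos(pn=1, ps=30)
    print(videos)

asyncio.run(main())
```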
|
open
|
2024-08-25T11:26:42Z
|
2024-08-26T01:41:02Z
|
https://github.com/Nemo2011/bilibili-api/issues/800
|
[] |
9ihbd2DZSMjtsf7vecXjz
| 0 |
erdewit/ib_insync
|
asyncio
| 64 |
Set Time in Force of orders
|
When placing a LimitOrder using the example in your notebook, it creates a Day order which is cancelled at the close of the trading day. Is it possible to change the Time in Force of the order to Good-Til-Canceled?
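For context, this is roughly what I am after (a sketch; assuming the `tif` order attribute is the right knob):
```python
from ib_insync import IB, LimitOrder, Stock

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)

contract = Stock('AAPL', 'SMART', 'USD')
# Hoped-for usage: a Good-Til-Canceled limit order instead of a Day order.
order = LimitOrder('BUY', 100, 150.0, tif='GTC')
trade = ib.placeOrder(contract, order)
```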
|
closed
|
2018-05-01T12:29:50Z
|
2018-05-01T14:15:03Z
|
https://github.com/erdewit/ib_insync/issues/64
|
[] |
nyxynyx
| 1 |
plotly/dash
|
plotly
| 2,720 |
Selenium 4.2.0 Version Vulnerability
|
**Selenium Version Vulnerability**: ```selenium>=3.141.0,<=4.2.0```
using ``` dash 2.14.2```
**Describe the bug**
We are using Snyk to scan the dependencies of our project, which uses the latest version of dash. The Snyk scan is showing these vulnerabilities (Snyk: [CVSS 7.5](https://security.snyk.io/vuln/SNYK-PYTHON-SELENIUM-6062316) NVD: [CVSS 7.5](https://nvd.nist.gov/vuln/detail/CVE-2023-5590)), as a result of the selenium version being capped at 4.2.0 [here](https://github.com/plotly/dash/blob/dev/requires-testing.txt).
**Expected behavior**
We expect there not to be open high vulnerabilities in the dash application - although they are only exposed through testing.
A suggestion is that this dependency on selenium is either upgraded, or removed from the client-facing installation.
|
open
|
2023-12-28T18:58:55Z
|
2024-08-13T19:44:29Z
|
https://github.com/plotly/dash/issues/2720
|
[
"bug",
"infrastructure",
"sev-1",
"P3"
] |
cbarrett3
| 2 |
django-oscar/django-oscar
|
django
| 3,602 |
order_by not working on SearchQuerySet
|
order_by is not working on SearchQuerySet.
In FacetedSearchView I am trying to sort/order the queryset in get_results(), but every time it returns the same results.
```
class FacetedSearchView(views.FacetedSearchView):
"""
A modified version of Haystack's FacetedSearchView
Note that facets are configured when the ``SearchQuerySet`` is initialised.
This takes place in the search application class.
See https://django-haystack.readthedocs.io/en/v2.1.0/views_and_forms.html#facetedsearchform
""" # noqa
# Haystack uses a different class attribute to CBVs
template = "oscar/search/results.html"
search_signal = user_search
def __call__(self, request):
response = super().__call__(request)
# Raise a signal for other apps to hook into for analytics
self.search_signal.send(
sender=self, session=self.request.session,
user=self.request.user, query=self.query)
return response
# Override this method to add the spelling suggestion to the context and to
# convert Haystack's default facet data into a more useful structure so we
# have to do less work in the template.
def extra_context(self):
extra = super().extra_context()
# Show suggestion no matter what. Haystack 2.1 only shows a suggestion
# if there are some results, which seems a bit weird to me.
if self.results.query.backend.include_spelling:
# Note, this triggers an extra call to the search backend
suggestion = self.form.get_suggestion()
if suggestion != self.query:
extra['suggestion'] = suggestion
# Convert facet data into a more useful data structure
if 'fields' in extra['facets']:
munger = FacetMunger(
self.request.get_full_path(),
self.form.selected_multi_facets,
self.results.facet_counts())
extra['facet_data'] = munger.facet_data()
has_facets = any([len(data['results']) for
data in extra['facet_data'].values()])
extra['has_facets'] = has_facets
# Pass list of selected facets so they can be included in the sorting
# form.
extra['selected_facets'] = self.request.GET.getlist('selected_facets')
return extra
def get_results(self):
# We're only interested in products (there might be other content types
# in the Solr index).
qs = super().get_results().models(Product)
fieldname = self.request.GET.get('order_by', None)
qs = qs.order_by(fieldname)
return qs
```
Settings
```
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
},
}
```
|
closed
|
2020-12-16T06:51:18Z
|
2021-09-08T06:48:57Z
|
https://github.com/django-oscar/django-oscar/issues/3602
|
[] |
pupattan
| 2 |
albumentations-team/albumentations
|
deep-learning
| 1,579 |
[benchmark] Doublecheck that ImgAUG actually runs transforms
|
closed
|
2024-03-12T20:35:35Z
|
2024-03-15T20:27:04Z
|
https://github.com/albumentations-team/albumentations/issues/1579
|
[] |
ternaus
| 0 |
|
python-arq/arq
|
asyncio
| 477 |
GitHub Releases: Date years are off-by-one
|
The years on the dates in the release titles in GitHub releases are off-by-one, e.g. 2023 instead of 2024:

|
open
|
2024-09-03T15:58:16Z
|
2024-09-04T09:17:30Z
|
https://github.com/python-arq/arq/issues/477
|
[] |
dannya
| 2 |
zappa/Zappa
|
django
| 865 |
[Migrated] Add a support for a default AWS event handler
|
Originally from: https://github.com/Miserlou/Zappa/issues/2115 by [jiaaro](https://github.com/jiaaro)
Ticket: #2112
This PR adds a way for users to define a default handler for AWS events. This allows for handling events for sources that don't exist at the time of deployment (like temporary SQS queues) as well as resources managed by systems like terraform whose ARNs are not yet known at the time the Zappa package is built.
Finally, it allows users to handle events from new AWS systems that Zappa has not specifically integrated yet.
|
closed
|
2021-02-20T13:03:02Z
|
2022-08-18T01:45:52Z
|
https://github.com/zappa/Zappa/issues/865
|
[] |
jneves
| 1 |
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 940 |
error when training with DRIVE database
|
I'm using the DRIVE database. I'm getting this error:
```
File "C:\Users\Dell\Anaconda3\envs\aa\lib\site-packages\PIL\TiffImagePlugin.py", line 1182, in _load_libtiff
raise OSError(err)
OSError: -2
```
|
open
|
2020-02-28T07:12:14Z
|
2020-02-28T07:12:14Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/940
|
[] |
manvirvirk
| 0 |
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,288 |
AttributeError: from_statement_ctx when using PostgreSQL UPSERT and RETURNING
|
### Describe the bug
Hi, we are running a FastAPI server connected to a Postgres DB instance running in Docker. There is a table we use to store information about our clients' timeslots for events.
The ORM model:
```python
class Timeslot(Base):
__tablename__ = "timeslot"
id: Mapped[bigintpk]
client_id: Mapped[int] = mapped_column()
begintime: Mapped[datetime] = mapped_column()
endtime: Mapped[datetime] = mapped_column()
...
```
We implement the following constraints on the table:
```sql
ALTER TABLE ...
ADD CONSTRAINT timeslot_no_overlap_constraint EXCLUDE USING gist (
client_id WITH =,
tstzrange(begintime, endtime, '[)'::text) WITH &&);
-- This is probably redundant because it is already cover by the constraint above
ALTER TABLE ...
ADD CONSTRAINT begintime_unique UNIQUE (client_id, begintime);
```
Note: we used an Alembic script to execute the constraint DDL above (i.e. raw SQL); we handle detection of constraint conflicts via try/except at runtime.
We expect timeslot insertion conflicts to happen frequently, and our use case requires the clients to know how many slots were actually inserted/skipped. The following code snippet shows how we achieve this using PostgreSQL's UPSERT functionality.
```python
# router.py
async def async_get_db() -> AsyncGenerator[AsyncSession, None]:
async with async_session() as session:
yield session
@router.post("some-endpoint")
async def handle_endpoint(
db: AsyncSession = Depends(async_get_db),
post_data:list[dict]=Body(...),
):
# some business logic
# .....
result = await sql_utils.bulk_upsert_slots(post_data, db=db)
return result
# sql_utils.py
from sqlalchemy.dialects import postgresql as pg
async def bulk_upsert_slots(mappings: list[dict], *, db: AsyncSession) -> dict:
counter= {"promised": len(mappings), "actual": 0}
upsert_stmt = pg.insert(Timeslot).values(mappings).on_conflict_do_nothing().returning(Timeslot.id)
cursor = await db.scalars(upsert_stmt)
new_timeslot_ids = cursor.all()
await db.commit()
counter.update({"actual": len(new_timeslot_ids)})
return counter
```
I can't recall whether the docs say the `returning` construct can be used with an upsert, but we are using it anyway (as shown in the code). It seems to work fine in both our local and production environments (i.e. the `counter` variable shows what we want).
But recently, one request threw an `AttributeError: from_statement_ctx` (from `sqlalchemy.orm.ORMDMLState._return_orm_returning`). See below for the relevant stack trace.
This request was trying to insert more than 1000 rows at once. I tried to reproduce it on my local machine, but the error never occurred.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.3
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
PostgreSQL 15.1
### Python Version
3.10
### Operating system
Rocky Linux
### To Reproduce
```python
# router.py
async def async_get_db() -> AsyncGenerator[AsyncSession, None]:
async with async_session() as session:
yield session
@router.post("some-endpoint")
async def handle_endpoint(
db: AsyncSession = Depends(async_get_db),
post_data:list[dict]=Body(...),
):
# some business logic
# .....
result = await sql_utils.bulk_upsert_slots(post_data, db=db)
return result
# sql_utils.py
from sqlalchemy.dialects import postgresql as pg
from sqlalchemy.ext.asyncio import AsyncSession
async def bulk_upsert_slots(mappings: list[dict], *, db: AsyncSession) -> dict:
counter= {"promised": len(mappings), "actual": 0}
upsert_stmt = pg.insert(Timeslot).values(mappings).on_conflict_do_nothing().returning(Timeslot.id)
cursor = await db.scalars(upsert_stmt)
new_timeslot_ids = cursor.all()
await db.commit()
counter.update({"actual": len(new_timeslot_ids)})
return counter
```
### Error
```
File "/app/backend/features/timeslot/router.py", line 185, in handle_endpoint
result = await sql_utils.bulk_upsert_slots(mappings, db=db)
│ │ │ └ <sqlalchemy.ext.asyncio.session.AsyncSession object at 0x7f8f46cdd0f0>
│ │ └ [{'client_id': 289, 'available': 1, 'begintime': datetime.datetime(2024, 4, 30, 22, 0, tzinfo=datetime.timezone.utc), 'endti...
│ └ <function bulk_upsert_slots at 0x7f8f488623b0>
└ <module 'backend.features.timeslot.sql_utils' from '/app/backend/features/timeslot/sql...
File "/app/backend/features/timeslot/sql_utils.py", line 29, in bulk_upsert_slots
cursor = await db.scalars(upsert_stmt)
│ │ └ <sqlalchemy.dialects.postgresql.dml.Insert object at 0x7f8f374ca1d0>
│ └ <function AsyncSession.scalars at 0x7f8f58cc43a0>
└ <sqlalchemy.ext.asyncio.session.AsyncSession object at 0x7f8f46cdd0f0>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 428, in scalars
result = await self.execute(
│ └ <function AsyncSession.execute at 0x7f8f58cc4280>
└ <sqlalchemy.ext.asyncio.session.AsyncSession object at 0x7f8f46cdd0f0>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 313, in execute
result = await greenlet_spawn(
└ <function greenlet_spawn at 0x7f8f5cae1a20>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 167, in greenlet_spawn
result = context.switch(value)
│ │ └ None
│ └ <method 'switch' of 'greenlet.greenlet' objects>
└ <_AsyncIoGreenlet object at 0x7f8f3715fc00 (otid=0x7f8f5dc43cc0) dead>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2229, in execute
return self._execute_internal(
│ └ <function Session._execute_internal at 0x7f8f58e056c0>
└ <sqlalchemy.orm.session.Session object at 0x7f8f46d0a110>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2124, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
│ │ └ <classmethod(<function BulkORMInsert.orm_execute_statement at 0x7f8f58dd5e10>)>
│ └ <class 'sqlalchemy.orm.bulk_persistence.BulkORMInsert'>
└ typing.Any
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/orm/bulk_persistence.py", line 1232, in orm_execute_statement
return cls._return_orm_returning(
│ └ <classmethod(<function ORMDMLState._return_orm_returning at 0x7f8f58dd55a0>)>
└ <class 'sqlalchemy.orm.bulk_persistence.BulkORMInsert'>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/orm/bulk_persistence.py", line 528, in _return_orm_returning
if compile_state.from_statement_ctx:
└ <sqlalchemy.sql.selectable.SelectState object at 0x7f8f367285e0>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1329, in __getattr__
return self._fallback_getattr(key)
│ │ └ 'from_statement_ctx'
│ └ <function MemoizedSlots._fallback_getattr at 0x7f8f5cab79a0>
└ <sqlalchemy.sql.selectable.SelectState object at 0x7f8f367285e0>
File "/app/venvs/v3/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1298, in _fallback_getattr
raise AttributeError(key)
└ 'from_statement_ctx'
AttributeError: from_statement_ctx
```
### Additional context
_No response_
|
closed
|
2024-04-18T09:13:24Z
|
2024-04-18T13:53:33Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11288
|
[
"postgresql",
"awaiting info"
] |
yoadev22
| 1 |
iperov/DeepFaceLive
|
machine-learning
| 32 |
I put Tim_Chrys.dfm into the dfm_models folder and ran the bat file. It reports: from PyQt6.QtCore import * ImportError: DLL load failed: The specified procedure could not be found.
|
closed
|
2022-01-18T15:39:10Z
|
2022-01-18T16:03:32Z
|
https://github.com/iperov/DeepFaceLive/issues/32
|
[] |
okato23
| 6 |
|
roboflow/supervision
|
machine-learning
| 956 |
set_classes bug when I use it in the YOLO-World model
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
If I use set_classes to set more than 7 classes, the model no longer works on M1/M2 MacBooks after I convert it to CoreML.
### Environment
python: 3.11.4
coremltools: 7.0
ultralytics: 8.1.19
### Minimal Reproducible Example
```python
def yolo_world_export():
with torch.no_grad():
# Initialize a YOLO model with pretrained weights
model = YOLO(
"yolov8s-world.pt"
) # You can also choose yolov8m/l-world.pt based on your needs
# Define custom classes specific to your application
custom_classes = [
"girl",
"ball",
"flower",
"vase",
"lavander",
"boy",
"car",
]
model.set_classes(custom_classes)
# Save the model with the custom classes defined (modified code)
model.save(
"custom_yolov8s.pt"
) # This saves extra metadata required for CoreML conversion
# # Load the saved model with custom classes
model = YOLO("custom_yolov8s.pt")
# # Export the model to CoreML format with non-maximum suppression enabled
model.export(format="coreml", nms=True)
```
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-02-28T15:57:47Z
|
2024-02-28T16:15:25Z
|
https://github.com/roboflow/supervision/issues/956
|
[
"bug"
] |
qihuijia
| 1 |
keras-team/keras
|
machine-learning
| 20,158 |
Tensorboard callback is blocking process
|
I am unable to find the transferred issue: https://github.com/keras-team/tf-keras/issues/496
This issue is still occurring and creates a performance bottleneck when writing to cloud storage.
|
open
|
2024-08-23T18:05:17Z
|
2024-09-05T17:09:59Z
|
https://github.com/keras-team/keras/issues/20158
|
[
"stat:awaiting keras-eng"
] |
rivershah
| 2 |
benbusby/whoogle-search
|
flask
| 243 |
[BUG/FEATURE] Bypass EU cookie consent
|
**Describe the bug**
I use a proxy setup that rotates proxies in the background, and from time to time, when I connect through the EU (in this particular instance DE), Whoogle displays a "Cookie consent" form from Google.
After reloading the page 2-3 times it vanishes; nonetheless it shows up on roughly every 1-3 requests made via the address-bar search.
**To Reproduce**
Steps to reproduce the behavior:
1. Add you instance to your browser, in my case ungoogled-chromium
2. Either proxy or VPN the instance to the EU(DE)
3. Perform a search via the URL bar
4. See error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [X] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [X] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [Win/Linux]
- Browser [ungoogled-chromium]
- Version [88.0.4324.182]
**Additional context**

|
closed
|
2021-03-30T02:18:03Z
|
2021-04-07T14:21:36Z
|
https://github.com/benbusby/whoogle-search/issues/243
|
[
"bug"
] |
Suika
| 26 |
httpie/cli
|
rest-api
| 727 |
JavaScript is disabled
|
I use HTTPie to access hackerone.com, and it responds that JavaScript needs to be enabled:
> [root@localhost ~]# http https://hackerone.com/monero/hacktivity?sort_type=latest_disclosable_activity_at\&filter=type%3Aall%20to%3Amonero&page=1
> It looks like your JavaScript is disabled. To use HackerOne, enable JavaScript in your browser and refresh this page.
How to fix it?
|
closed
|
2018-11-06T05:50:09Z
|
2018-11-06T05:57:17Z
|
https://github.com/httpie/cli/issues/727
|
[] |
linkwik
| 1 |
CPJKU/madmom
|
numpy
| 94 |
move norm_observations out of observation models
|
If needed, the normalisation can be performed in the beat tracking classes before the observations are passed to the Viterbi algorithm.
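A minimal sketch of what that could look like in a beat-tracking class (illustrative only; the function name is not from the codebase):
```python
import numpy as np

def normalise_observations(observations):
    # Normalise the activations here, in the beat tracking class,
    # right before they are handed to the Viterbi algorithm.
    observations = np.asarray(observations, dtype=float)
    return observations / observations.max()
```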
|
closed
|
2016-02-19T14:47:24Z
|
2016-02-22T08:20:58Z
|
https://github.com/CPJKU/madmom/issues/94
|
[] |
superbock
| 2 |
pytest-dev/pytest-selenium
|
pytest
| 320 |
Compatibility with pytest 8.0
|
Hi,
Since the release of pytest 8.0, a pytest internal error is raised while launching tests:
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/main.py", line 272, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/main.py", line 326, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 138, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 121, in _multicall
INTERNALERROR> teardown.throw(exception) # type: ignore[union-attr]
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/logging.py", line 796, in pytest_runtestloop
INTERNALERROR> return (yield) # Run all the tests.
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/main.py", line 351, in pytest_runtestloop
INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 181, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_result.py", line 99, in get_result
INTERNALERROR> raise exc.with_traceback(exc.__traceback__)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 166, in _multicall
INTERNALERROR> teardown.throw(outcome._exception)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/warnings.py", line 109, in pytest_runtest_protocol
INTERNALERROR> return (yield)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 166, in _multicall
INTERNALERROR> teardown.throw(outcome._exception)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/assertion/__init__.py", line 174, in pytest_runtest_protocol
INTERNALERROR> return (yield)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 166, in _multicall
INTERNALERROR> teardown.throw(outcome._exception)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/unittest.py", line 408, in pytest_runtest_protocol
INTERNALERROR> res = yield
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 166, in _multicall
INTERNALERROR> teardown.throw(outcome._exception)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/faulthandler.py", line 85, in pytest_runtest_protocol
INTERNALERROR> return (yield)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/runner.py", line 114, in pytest_runtest_protocol
INTERNALERROR> runtestprotocol(item, nextitem=nextitem)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/runner.py", line 133, in runtestprotocol
INTERNALERROR> reports.append(call_and_report(item, "call", log))
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/_pytest/runner.py", line 228, in call_and_report
INTERNALERROR> report: TestReport = hook.pytest_runtest_makereport(item=item, call=call)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 155, in _multicall
INTERNALERROR> teardown[0].send(outcome)
INTERNALERROR> File "/root/venv/lib/python3.10/site-packages/pytest_selenium/pytest_selenium.py", line 256, in pytest_runtest_makereport
INTERNALERROR> exclude = item.config.getini("selenium_exclude_debug").lower()
INTERNALERROR> AttributeError: 'NoneType' object has no attribute 'lower'
```
After investigating, it appears that the cause is a change of behaviour in `parser.addini` (cf. the [pytest changelog](https://docs.pytest.org/en/stable/changelog.html#other-breaking-changes)). The current implementation clearly needs the default value to be a string (because of the `lower` call), but since pytest 8.0 the default value is `None` (second bullet point of the pytest documentation).
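A sketch of a possible plugin-side fix (assuming the option is registered via `addini`; the help text here is a placeholder): pass an explicit string default so `getini` never returns `None`:
```python
def pytest_addoption(parser):
    # pytest >= 8.0 no longer coerces a missing addini() default to "",
    # so register the option with an explicit string default.
    parser.addini(
        "selenium_exclude_debug",
        help="comma-separated list of debug artifacts to exclude",
        default="",
    )
```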
|
closed
|
2024-02-01T10:20:15Z
|
2024-02-01T15:01:28Z
|
https://github.com/pytest-dev/pytest-selenium/issues/320
|
[] |
sandre35
| 0 |
Yorko/mlcourse.ai
|
data-science
| 641 |
Docker image Kernel error
|
Windows 10
Jupyter starts, although it throws a 'Kernel error' and cannot start the Python kernel.
```
[...]
File "/opt/conda/lib/python3.6/site-packages/jupyter_core/paths.py", line 412, in secure_write
.format(file=fname, permissions=os.stat(fname).st_mode))
RuntimeError: Permissions assignment failed for secure file: '/notebooks/home/.local/share/jupyter/runtime/kernel-289a0297-fc62-4f53-bbe5-daaccaa23f33.json'. Got '33261' instead of '600'
```
|
closed
|
2019-11-01T15:03:24Z
|
2019-11-04T14:09:40Z
|
https://github.com/Yorko/mlcourse.ai/issues/641
|
[
"enhancement"
] |
elgator
| 2 |
graphql-python/graphene-sqlalchemy
|
sqlalchemy
| 70 |
Question about querying a single resource with flask sqlalchemy example.
|
I have followed the example at http://docs.graphene-python.org/projects/sqlalchemy/en/latest/tutorial/#testing-our-graphql-schema and have a question.
I can issue the following query to list all employees and their departments.
```
{
allEmployees {
edges {
node {
name
department {
name
}
}
}
}
}
```
I would like to be able to query a single employee's name and their department's name by employee id. I can query using the Node interface, but that doesn't allow me to access the Employee name field. Am I supposed to "cast" this to the specific Employee type to do that? What I would like is something like:
```
{
employee(id: "someid") {
name
department {
name
}
}
```
Is this reasonable or am I "doing it wrong"? What is best practice for accessing a single employee using the Relay connections/nodes/edges paradigm?
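For what it's worth, this is roughly the schema shape I imagine (a sketch; it assumes the tutorial's `Employee` type is in scope):
```python
import graphene
from graphene import relay

class Query(graphene.ObjectType):
    node = relay.Node.Field()
    # A node field typed to Employee, so name and department are
    # directly selectable given a relay global id.
    employee = relay.Node.Field(Employee)

schema = graphene.Schema(query=Query)
```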
Many thanks in advance!
|
open
|
2017-08-23T18:13:49Z
|
2019-07-26T06:02:19Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/70
|
[] |
mojochao
| 7 |
chatopera/Synonyms
|
nlp
| 73 |
Querying synonyms of '小康' returns an HTML tag and an English word
|
synonyms.display('小康')
'小康'近义词:
1. 小康:1.0
2. 新时期:0.505846
3. 中心镇:0.471591
4. 书香:0.471404
5. 家境贫寒:0.471054
6. 下乡:0.443652
7. 三代人:0.424502
8. 玉溪:0.416268
9. </s>:0.0780543
10. Pardosa:0.0624822
|
closed
|
2018-12-22T01:40:30Z
|
2019-04-21T01:31:53Z
|
https://github.com/chatopera/Synonyms/issues/73
|
[] |
universe-st
| 1 |
JaidedAI/EasyOCR
|
pytorch
| 623 |
Does training the model with one-letter images help?
|

From the 1,000 images the EasyOCR team gave us to test model training, I saw that they use one or two words per image. Is it a bad idea to train the model on single letters if, in the end, we want to predict words?
If I have an image with "there is a game", should I only create the images "there", "is" and "game"?
Actually I'm creating the images:
"there is a game", "there", "is", "a", "game", "t", "h", "e", "r", "e", "i", "s", "a", "g", "a", "m", "e"
I ran some tests with and without the letters and went from 76% to 78% accuracy, but this is somewhat biased because I have fewer than 1,500 training images, so I'm hoping to find someone who has already asked themselves this question and could help me understand.
I feel it could help the model understand the concept of letters within words, but maybe giving a letter the same size as a whole word in the dataset makes it useless; little tips like that are what I'm looking for.
|
closed
|
2021-12-15T16:42:04Z
|
2022-08-07T05:01:27Z
|
https://github.com/JaidedAI/EasyOCR/issues/623
|
[] |
AvekIA
| 0 |
profusion/sgqlc
|
graphql
| 58 |
Cannot un-pickle a pickled Operation instance 🥒
|
Hey, I'm using sgqlc to introspect a schema and then generate queries from it and it is going well. However, I'm trying to parallelize tasks that contain an operation, which requires pickling it inside the task.
Here is the traceback that arises when trying to deserialize the (successfully) serialized op:
```
Traceback (most recent call last):
File "/Users/myuser/myproj/venv/lib/python3.7/site-packages/sgqlc/types/__init__.py", line 657, in __getattr__
return self.__kinds[key] # .type, .scalar, etc...
File "/Users/myuser/myproj/venv/lib/python3.7/site-packages/sgqlc/types/__init__.py", line 657, in __getattr__
return self.__kinds[key] # .type, .scalar, etc...
File "/Users/myuser/myproj/venv/lib/python3.7/site-packages/sgqlc/types/__init__.py", line 657, in __getattr__
return self.__kinds[key] # .type, .scalar, etc...
[Previous line repeated 486 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object
```
I can reproduce this like so (using the generated python schema from sgqlc):
```python
import my_python_schema
from sgqlc.operation import Operation  # needed for Operation below
from cloudpickle import dumps, loads
schema_query = my_python_schema.Query
op = Operation(schema_query)
pickled_op = dumps(op) # works
unpickled_op = loads(pickled_op) # blows up with traceback above
```
Are there any options I'm missing that could maybe pre-flatten the types or specify the max depth?
Thanks for the sweet project 🙂
|
open
|
2019-07-31T23:52:16Z
|
2024-03-22T11:57:33Z
|
https://github.com/profusion/sgqlc/issues/58
|
[
"enhancement",
"waiting-input"
] |
chartpath
| 6 |
aminalaee/sqladmin
|
sqlalchemy
| 491 |
URLType field has no converter defined
|
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
When a URLType field is added to a model, "edit" on the record causes an exception:
```python
from sqlalchemy_fields.types import URLType

class MyModel(Base):
    image_url = Column(URLType(255))
    ...
```

```
sqladmin.exceptions.NoConverterFound: Could not find field converter for column image_url (<class 'sqlalchemy_fields.types.url.URLType'>).
```
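A workaround I am experimenting with (a sketch; assuming `form_overrides` applies to this case) is to force a plain string field for the column:
```python
from sqladmin import ModelView
from wtforms import StringField

class MyModelAdmin(ModelView, model=MyModel):
    # Bypass the missing URLType converter with a plain WTForms field.
    form_overrides = {"image_url": StringField}
```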
### Steps to reproduce the bug
1. Create a model with a URLType field
2. Add a sqladmin ModelView for that model
3. Display the list of objects
4. Select the "edit" icon for the object
### Expected behavior
I would expect to see the default "Edit" view with the URL field editable
### Actual behavior
sqladmin.exceptions.NoConverterFound thrown
### Debugging material
```
Traceback (most recent call last):
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 443, in handle
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/authentication.py", line 60, in wrapper_decorator
return await func(*args, **kwargs)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/application.py", line 480, in edit
Form = await model_view.scaffold_form()
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/models.py", line 1021, in scaffold_form
return await get_model_form(
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 586, in get_model_form
field = await converter.convert(
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 312, in convert
converter = self.get_converter(prop=prop)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 266, in get_converter
raise NoConverterFound( # pragma: nocover
sqladmin.exceptions.NoConverterFound: Could not find field converter for column image_url (<class 'sqlalchemy_fields.types.url.URLType'>).
```
### Environment
- MacOS / Python 3.9
### Additional context
_No response_
|
closed
|
2023-05-11T18:21:30Z
|
2023-05-11T19:58:31Z
|
https://github.com/aminalaee/sqladmin/issues/491
|
[] |
dhait
| 2 |
dask/dask
|
scikit-learn
| 11,160 |
Can not process datasets created by the older version of Dask
|
**Describe the issue**:
After upgrading Dask from `2023.9.3` to the latest version `2024.5.2` (or `2024.4.1`), we cannot load existing parquet files created by the previous version.
I'm getting an error during the `to_parquet` operation (when `dask-expr` is enabled):
```
.../.venv/lib/python3.10/site-packages/dask_expr/_collection.py:301: UserWarning: Dask annotations {'retries': 5} detected. Annotations will be ignored when using query-planning.
warnings.warn(
Traceback (most recent call last):
File ".../demand_forecasting/dask/data.py", line 292, in _write_to_gcs
ddf.to_parquet(url, **kwargs)
File ".../.venv/lib/python3.10/site-packages/dask_expr/_collection.py", line 3266, in to_parquet
return to_parquet(self, path, **kwargs)
File ".../.venv/lib/python3.10/site-packages/dask_expr/io/parquet.py", line 653, in to_parquet
out = out.compute(**compute_kwargs)
File ".../.venv/lib/python3.10/site-packages/dask_expr/_collection.py", line 476, in compute
return DaskMethodsMixin.compute(out, **kwargs)
File ".../.venv/lib/python3.10/site-packages/dask/base.py", line 375, in compute
(result,) = compute(self, traverse=False, **kwargs)
File ".../.venv/lib/python3.10/site-packages/dask/base.py", line 661, in compute
results = schedule(dsk, keys, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/dask/dataframe/dispatch.py", line 68, in concat
File "/opt/conda/lib/python3.10/site-packages/dask/dataframe/backends.py", line 688, in concat_pandas
File "/opt/conda/lib/python3.10/site-packages/pandas/core/reshape/concat.py", line 489, in _get_ndims
TypeError: cannot concatenate object of type '<class 'tuple'>'; only Series and DataFrame objs are valid
```
Disabling the `dask-expr` leads to different error during repartition operation:
```
Traceback (most recent call last):
File ".../dataset/partition.py", line 142, in repartition
ddf = ddf.repartition(partition_size=partition_size)
File ".../.venv/lib/python3.10/site-packages/dask/dataframe/core.py", line 1802, in repartition
return repartition_size(self, partition_size)
File ".../.venv/lib/python3.10/site-packages/dask/dataframe/core.py", line 8104, in repartition_size
mem_usages = df.map_partitions(total_mem_usage, deep=True).compute()
File ".../.venv/lib/python3.10/site-packages/dask/base.py", line 375, in compute
(result,) = compute(self, traverse=False, **kwargs)
File ".../.venv/lib/python3.10/site-packages/dask/base.py", line 661, in compute
results = schedule(dsk, keys, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/dask/dataframe/shuffle.py", line 839, in partitioning_index
File "/opt/conda/lib/python3.10/site-packages/dask/dataframe/backends.py", line 523, in hash_object_pandas
File "/opt/conda/lib/python3.10/site-packages/pandas/core/common.py", line 573, in require_length_match
ValueError: Length of values (0) does not match length of index (12295809)
```
The dataset consists of parquet 3 files (e.g., `dataset.parquet/part.X.parquet`) with the following Dask dtypes:
```
Col1, Int64
Col2, Int16
Col3, Int32
Col4, datetime64[us]
```
Pandas dtypes:
```
Col1, int64[pyarrow]
Col2, int16[pyarrow]
Col3, int32[pyarrow]
Col4, timestamp[ns][pyarrow]
```
The index is named as `__null_dask_index__`.
The important observation is that the **TypeError disappears** if I take only part of the dataset as follows:
```python
ddf.loc[:100000]
```
However, disabling the `dask-expr` still leads to an error:
```
ValueError: Length of values (0) does not match length of index (100001)
```
**Minimal Complete Verifiable Example**:
1. Use any parquet data
2. Try to shuffle on a non-existent column and export the data:
```
ddf = (dd
.read_parquet('gs://.../....parquet')
.shuffle(on='does_not_exist', npartitions=64)
.repartition(partition_size='100MB')
.to_parquet('data_test.parquet'))
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.5.2
- Python version: 3.10
- Operating System: WSL, Ubuntu 22.04
- Install method (conda, pip, source): poetry
```
pandas==2.2.2
pyarrow==14.0.2
```
|
open
|
2024-06-04T08:02:39Z
|
2024-06-13T06:24:03Z
|
https://github.com/dask/dask/issues/11160
|
[
"needs triage"
] |
dbalabka
| 9 |
MaartenGr/BERTopic
|
nlp
| 1,419 |
BERTopic sometimes gives me topics that are not mutually exclusive
|



As seen above, topic 7 is included in topic 0, as are topics 21 and 26, and topics 23 and 25, which makes topic interpretation confusing.
I have no idea why this occurs or what I can do about it. Is it possible to alleviate this? I have tried a lot of hyperparameter tuning, but the phenomenon has persisted for quite a while. @MaartenGr Any suggestion for me?
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models.coherencemodel import CoherenceModel
from bertopic.vectorizers import ClassTfidfTransformer
from sentence_transformers import SentenceTransformer
from bertopic.representation import KeyBERTInspired
from bertopic import BERTopic
from hdbscan import HDBSCAN
from umap import UMAP
import gensim.corpora as corpora
import pandas as pd
import wandb
import os

path_output = os.path.join(os.getcwd(), 'Result', 'RQ1', 'Special Topics')
path_model = os.path.join(os.getcwd(), 'Code', 'RQ1', 'Special Topic Modeling', 'Model')
if not os.path.exists(path_model):
    os.makedirs(path_model)

wandb_project = 'asset-management-topic-modeling'
os.environ["WANDB_API_KEY"] = 'XXXXX'  # redacted
os.environ["TOKENIZERS_PARALLELISM"] = "true"
os.environ["WANDB__SERVICE_WAIT"] = "100"

# set default sweep configuration
config_defaults = {
    # Refer to https://www.sbert.net/docs/pretrained_models.html
    'model_name': 'all-mpnet-base-v2',
    'metric_distance': 'manhattan',
    'calculate_probabilities': True,
    'reduce_frequent_words': True,
    'prediction_data': True,
    'low_memory': False,
    'random_state': 42,
    'ngram_range': 2,
}

config_sweep = {
    'method': 'grid',
    'metric': {
        'name': 'Coherence CV',
        'goal': 'maximize'
    },
    'parameters': {
        'n_components': {
            'values': [3, 4, 5, 6, 7],
        },
    }
}


class TopicModeling:
    def __init__(self, topic_type, min_cluster_size=20):
        # Initialize an empty list to store top models
        self.top_models = []
        self.path_model = path_model

        df = pd.read_json(os.path.join(path_output, 'preprocessed.json'))
        if topic_type == 'anomaly':
            df = df[df['Challenge_type'] == 'anomaly']
            self.docs = df[df['Challenge_summary'] != 'na']['Challenge_summary'].tolist() + df[df['Challenge_root_cause'] != 'na']['Challenge_root_cause'].tolist()
        elif topic_type == 'solution':
            self.docs = df[df['Solution'] != 'na']['Solution'].tolist()

        config_defaults['min_cluster_size'] = min_cluster_size
        config_sweep['name'] = topic_type
        config_sweep['parameters']['min_samples'] = {
            'values': list(range(1, config_defaults['min_cluster_size'] + 1))
        }

    def __train(self):
        # Initialize a new wandb run
        with wandb.init() as run:
            # update any values not set by sweep
            run.config.setdefaults(config_defaults)

            # Step 1 - Extract embeddings
            embedding_model = SentenceTransformer(run.config.model_name)
            # Step 2 - Reduce dimensionality
            umap_model = UMAP(n_components=wandb.config.n_components, metric=run.config.metric_distance,
                              random_state=run.config.random_state, low_memory=run.config.low_memory)
            # Step 3 - Cluster reduced embeddings
            hdbscan_model = HDBSCAN(min_cluster_size=run.config.min_cluster_size,
                                    min_samples=wandb.config.min_samples, prediction_data=run.config.prediction_data)
            # Step 4 - Tokenize topics
            vectorizer_model = TfidfVectorizer(ngram_range=(1, run.config.ngram_range))
            # Step 5 - Create topic representation
            ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=run.config.reduce_frequent_words)
            # Step 6 - Fine-tune topic representation
            representation_model = KeyBERTInspired()

            # All steps together
            topic_model = BERTopic(
                embedding_model=embedding_model,
                umap_model=umap_model,
                hdbscan_model=hdbscan_model,
                vectorizer_model=vectorizer_model,
                ctfidf_model=ctfidf_model,
                representation_model=representation_model,
                calculate_probabilities=run.config.calculate_probabilities
            )
            topics, _ = topic_model.fit_transform(self.docs)

            # Preprocess Documents
            documents = pd.DataFrame({"Document": self.docs,
                                      "ID": range(len(self.docs)),
                                      "Topic": topics})
            documents_per_topic = documents.groupby(
                ['Topic'], as_index=False).agg({'Document': ' '.join})
            cleaned_docs = topic_model._preprocess_text(
                documents_per_topic.Document.values)

            # Extract vectorizer and analyzer from BERTopic
            vectorizer = topic_model.vectorizer_model
            analyzer = vectorizer.build_analyzer()

            # Extract features for Topic Coherence evaluation
            tokens = [analyzer(doc) for doc in cleaned_docs]
            dictionary = corpora.Dictionary(tokens)
            corpus = [dictionary.doc2bow(token) for token in tokens]
            topic_words = [[words for words, _ in topic_model.get_topic(
                topic)] for topic in range(len(set(topics)) - 1)]

            coherence_cv = CoherenceModel(
                topics=topic_words,
                texts=tokens,
                corpus=corpus,
                dictionary=dictionary,
                coherence='c_v'
            )
            coherence_umass = CoherenceModel(
                topics=topic_words,
                texts=tokens,
                corpus=corpus,
                dictionary=dictionary,
                coherence='u_mass'
            )
            coherence_cuci = CoherenceModel(
                topics=topic_words,
                texts=tokens,
                corpus=corpus,
                dictionary=dictionary,
                coherence='c_uci'
            )
            coherence_cnpmi = CoherenceModel(
                topics=topic_words,
                texts=tokens,
                corpus=corpus,
                dictionary=dictionary,
                coherence='c_npmi'
            )

            coherence_cv = coherence_cv.get_coherence()
            wandb.log({'Coherence CV': coherence_cv})
            wandb.log({'Coherence UMASS': coherence_umass.get_coherence()})
            wandb.log({'Coherence UCI': coherence_cuci.get_coherence()})
            wandb.log({'Coherence NPMI': coherence_cnpmi.get_coherence()})

            number_topics = topic_model.get_topic_info().shape[0] - 1
            wandb.log({'Topic Number': number_topics})
            wandb.log(
                {'Uncategorized Post Number': topic_model.get_topic_info().at[0, 'Count']})

            model_name = f'{config_sweep["name"]}_{run.id}'
            topic_model.save(os.path.join(self.path_model, model_name))

    def sweep(self):
        wandb.login()
        sweep_id = wandb.sweep(config_sweep, project=wandb_project)
        wandb.agent(sweep_id, function=self.__train)
```
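A sketch of one possible mitigation, offered as an assumption rather than a confirmed fix: merge the overlapping topics after fitting, or reduce the topic count and let BERTopic collapse similar ones. `merge_topics` and `reduce_topics` are part of the public BERTopic API; the topic ids come from the screenshots above, and the target count is arbitrary.
```python
# Merge the topic pairs that overlap in the screenshots above.
# `docs` is the same list passed to fit_transform (self.docs in the code).
topic_model.merge_topics(docs, topics_to_merge=[[0, 7], [21, 26], [23, 25]])

# Alternatively, collapse similar topics to a smaller target count
# (20 is an assumed value, not taken from the issue):
topic_model.reduce_topics(docs, nr_topics=20)
```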
|
open
|
2023-07-21T04:53:43Z
|
2023-07-21T07:58:16Z
|
https://github.com/MaartenGr/BERTopic/issues/1419
|
[] |
zhimin-z
| 5 |
plotly/plotly.py
|
plotly
| 4,789 |
Export default module in plotly-renderer.js
|
Hi There!
We are using `jupyter-plotly` to support plotly plots in [mystmd's web themes](https://mystmd.org), so that people publishing Jupyter notebooks with plotly outputs can have their plots displayed and interactive on their websites.
To achieve this we currently have to patch plotly-renderer.js to add a default export. See: https://github.com/jupyter-book/myst-theme/blob/3a1b70b6f2a6b827effb60891f0e693c9bf65e05/patches/jupyterlab-plotly%2B5.18.0.patch
We'd love to avoid the patch, but we appreciate that a patch that works for our case might break other things. If you have advice on how to change our usage, which is pretty simple (see: https://github.com/jupyter-book/myst-theme/blob/3a1b70b6f2a6b827effb60891f0e693c9bf65e05/packages/jupyter/src/plotly.ts#L40), without the need for any changes or the patch, we'd love to go that way too.
Thanks!
|
open
|
2024-10-09T08:41:02Z
|
2024-10-10T16:28:25Z
|
https://github.com/plotly/plotly.py/issues/4789
|
[
"feature",
"P3"
] |
stevejpurves
| 0 |
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 183 |
Came here from Bilibili (B站), thanks a lot, here's a star
|
**System information**
* Have I written custom code:
* OS Platform(e.g., window10 or Linux Ubuntu 16.04):
* Python version:
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3):
* Use GPU or not:
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network):
**Describe the current behavior**
**Error info / logs**
|
closed
|
2021-03-15T12:37:22Z
|
2021-03-16T01:37:47Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/183
|
[] |
littledeep
| 1 |
talkpython/data-driven-web-apps-with-flask
|
sqlalchemy
| 4 |
Pre-create db folder for users
|
Some users are reporting this error because they have not created the db folder first. Not sure whether this was missed or just not emphasized enough in the course, but it's no big deal to create that folder in the starter code to help smooth things over:
```
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: http://sqlalche.me/e/e3q8)
```
This happens during Lesson 09 > Creating tables.
It could be an issue on my end, but the only way I found to make it work was to create the db folder myself.
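A minimal sketch of the fix, assuming the db folder sits next to the module that builds the SQLAlchemy engine (the exact course layout may differ):
```python
import os

# Create the folder that will hold the SQLite file before SQLAlchemy opens it.
db_folder = os.path.join(os.path.dirname(__file__), 'db')
os.makedirs(db_folder, exist_ok=True)
```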
|
closed
|
2019-07-30T15:17:47Z
|
2019-07-30T15:18:16Z
|
https://github.com/talkpython/data-driven-web-apps-with-flask/issues/4
|
[] |
mikeckennedy
| 0 |
sinaptik-ai/pandas-ai
|
pandas
| 646 |
Raising `LLMResponseHTTPError` when a remote provider responds with an error code
|
### 🚀 The feature
I accidentally set a wrong HF token while using the `Starcoder` model. I then couldn't figure out what the problem was, since the [`query()` method expects only a proper response](https://github.com/gventuri/pandas-ai/blob/f9facf383b6aff8e92065720a4719c5de11dc696/pandasai/llm/base.py#L335); sure enough, it kept failing when trying to get the first item of the response JSON:
```python
return response.json()[0]["generated_text"]
```
Even worse, the exception was caught [right here](https://github.com/gventuri/pandas-ai/blob/f9facf383b6aff8e92065720a4719c5de11dc696/pandasai/smart_datalake/__init__.py#L377), so the only message I saw was:
```
Unfortunately, I was not able to answer your question, because of the following error:
0
```
_note:_ I guess 0 here is the index used when getting the first item of the JSON response array
### Motivation, pitch
It would be good to raise an error when the LLM provider responds with an HTTP error code, I believe.
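A rough sketch of the proposed behavior, assuming a `requests`-style response object; the exception name comes from this issue's title, and the helper name and signature are assumptions, not the library's actual API:
```python
import requests

class LLMResponseHTTPError(Exception):
    def __init__(self, status_code: int, error: str):
        super().__init__(
            f"Remote LLM provider responded with HTTP {status_code}: {error}"
        )

def parse_generated_text(response: requests.Response) -> str:
    # Fail fast with a clear message instead of the opaque index error above.
    if response.status_code != 200:
        raise LLMResponseHTTPError(response.status_code, response.text)
    return response.json()[0]["generated_text"]
```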
### Alternatives
We could just log the error, but I doubt that would help to handle the problem.
### Additional context
_No response_
|
closed
|
2023-10-14T20:52:16Z
|
2023-10-17T12:38:44Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/646
|
[] |
nautics889
| 0 |
xorbitsai/xorbits
|
numpy
| 586 |
BUG: string accessor does not support getitem
|
### Describe the bug
Indexing into a string accessor with `.str[0]` raises a `TypeError`:
```
Traceback (most recent call last):
  File "/data/python/study/analysis_year_xorb.py", line 49, in <module>
    df_xiang = df[df['vehicleId'].str[0] == '湘']
                  ~~~~~~~~~~~~~~~~~~~^^^
TypeError: 'StringAccessor' object is not subscriptable
```
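A possible workaround until `__getitem__` is supported, assuming xorbits mirrors the pandas string API (`.str.slice` exists in pandas, so presumably here too):
```python
# Slice the first character with .str.slice instead of .str[0];
# the column name comes from the traceback above.
df_xiang = df[df['vehicleId'].str.slice(0, 1) == '湘']
```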
|
closed
|
2023-07-10T04:44:10Z
|
2023-07-10T08:46:29Z
|
https://github.com/xorbitsai/xorbits/issues/586
|
[
"bug"
] |
qinxuye
| 0 |