repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
KaiyangZhou/deep-person-reid
|
computer-vision
| 74 |
Could you please share some hyperparameters for Resnet-50 in several datasets?
|
I am really grateful for your project; I am just starting a person re-ID project as a newcomer.
However, except with some settings you shared or the default settings, the ResNet-50 models I trained perform far worse than the benchmark.
Could you please share the hyperparameters of ResNet-50 with xent loss on the Market1501 | CUHK03 | DukeMTMC-reID | MSMT17 datasets?
I would really appreciate a reply.
|
closed
|
2018-10-29T06:48:25Z
|
2018-11-11T17:38:23Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/74
|
[] |
wzziqian
| 6 |
psf/requests
|
python
| 6,228 |
Explain the status and the future of the library (in PR template, README, docs site)
|
Attempting to file a new feature request shows the text:
> Requests is not accepting feature requests at this time.
Fair enough, there must be a reason for that (e.g. lack of maintainers).
However, maybe explain a bit more - if new features are not accepted, what is the future of the library in general? You could:
- Pin a ticket on the issue tracker
- Add a note to the README
- And/or maybe to the docs site
The current state leaves people (at least me) searching/googling for the status and an explanation, yet nothing turns up.
|
open
|
2022-09-03T15:12:06Z
|
2023-04-06T19:38:02Z
|
https://github.com/psf/requests/issues/6228
|
[] |
tuukkamustonen
| 3 |
aiortc/aioquic
|
asyncio
| 170 |
Creating Multiple streams from Client
|
What I see in the current implementation is when you create a stream from the client-side by calling:
https://github.com/aiortc/aioquic/blob/c99b43f4c7a1903e9ab2431932395bb7e0b29232/src/aioquic/asyncio/protocol.py#L63-L75
You get the stream_id and create reader and writer objects; however, you don't create a new entry in QuicConnection._streams. When a new stream is created it gets the same stream_id of 0 because there are no entries in QuicConnection._streams. I believe create_stream should also call:
https://github.com/aiortc/aioquic/blob/c99b43f4c7a1903e9ab2431932395bb7e0b29232/src/aioquic/quic/connection.py#L1157-L1197
|
closed
|
2021-03-09T16:35:58Z
|
2022-08-09T16:58:49Z
|
https://github.com/aiortc/aioquic/issues/170
|
[
"stale"
] |
dimitsqx
| 3 |
statsmodels/statsmodels
|
data-science
| 8,752 |
I wrote a customized state space class but could not figure out the error. Please help.
|
Statsmodels - 0.14.0.dev535
@ChadFulton Could you please help me with this error? Thanks!
```
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_46580/847275192.py in <module>
----> 1 two_factor_fit = two_factor_mod.fit()
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in fit(self, start_params, transformed, includes_fixed, cov_type, cov_kwds, method, maxiter, full_output, disp, callback, return_params, optim_score, optim_complex_step, optim_hessian, flags, low_memory, **kwargs)
726 else:
727 func = self.smooth
--> 728 res = func(mlefit.params, transformed=False, includes_fixed=False,
729 cov_type=cov_type, cov_kwds=cov_kwds)
730
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in smooth(self, params, transformed, includes_fixed, complex_step, cov_type, cov_kwds, return_ssm, results_class, results_wrapper_class, **kwargs)
887
888 # Wrap in a results object
--> 889 return self._wrap_results(params, result, return_ssm, cov_type,
890 cov_kwds, results_class,
891 results_wrapper_class)
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in _wrap_results(self, params, result, return_raw, cov_type, cov_kwds, results_class, wrapper_class)
786 wrapper_class = self._res_classes['fit'][1]
787
--> 788 res = results_class(self, params, result, **result_kwargs)
789 result = wrapper_class(res)
790 return result
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in __init__(self, model, params, results, cov_type, cov_kwds, **kwargs)
2316 self.param_names = [
2317 '%s (fixed)' % name if name in self.fixed_params else name
-> 2318 for name in (self.data.param_names or [])]
2319
2320 # Save the state space representation output
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\base\data.py in param_names(self)
354 def param_names(self):
355 # for handling names of 'extra' parameters in summary, etc.
--> 356 return self._param_names or self.xnames
357
358 @param_names.setter
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
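Not knowing the custom class, one guess based on the traceback: `param_names` ends up evaluating `self._param_names or self.xnames`, so if the custom state space model hands statsmodels a NumPy array of parameter names, the truth-value test becomes ambiguous. A minimal sketch of that failure mode and the fix, with hypothetical names (not taken from the original model):
```python
import numpy as np

# If the custom MLEModel subclass exposes param_names as an ndarray...
names = np.array(["loading.f1", "sigma2.irregular"])
# ...then expressions like `names or []` inside statsmodels raise the same
# "truth value of an array ... is ambiguous" ValueError seen above.

# Returning a plain Python list of strings from the param_names property
# avoids the ambiguity:
names = list(names)
print(names or [])  # ['loading.f1', 'sigma2.irregular']
```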
|
open
|
2023-03-22T07:15:49Z
|
2023-03-22T22:38:23Z
|
https://github.com/statsmodels/statsmodels/issues/8752
|
[
"comp-tsa-statespace",
"question"
] |
YueHaiming
| 4 |
CPJKU/madmom
|
numpy
| 443 |
TempoDetector batch processing hangs on bad input
|
### Expected behaviour
Run
tempodetector --mirex batch 105247.mp3
It should finish processing, no matter what.
The attached test file is from [FMA_medium](https://github.com/mdeff/fma).
### Actual behaviour
It prints a message that the file could not be read (which is correct), but then simply hangs.
I'll provide a fix in a PR shortly.
[105247.mp3.zip](https://github.com/CPJKU/madmom/files/3440586/105247.mp3.zip)
|
closed
|
2019-07-29T06:47:38Z
|
2019-07-29T07:49:15Z
|
https://github.com/CPJKU/madmom/issues/443
|
[] |
hendriks73
| 0 |
graphql-python/graphene
|
graphql
| 769 |
Cannot make optional mutation argument
|
According to the docs args are optional by default:
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float()
```
Here is a query:
```graphql
result = authed_graphql_client.execute(
    '''
    mutation($msisdn: String!, $networkId: String!, $productId: String!) {
        rechargeSim(msisdn: $msisdn,
                    networkId: $networkId,
                    productId: $productId) {
            rechargeId
            message
        }
    }
    ''',
    variable_values={
        'msisdn': sim.msisdn,
        'networkId': network_id,
        'productId': product_id
    }
)
```
I get this error:
`TypeError: mutate() missing 1 required positional argument: 'airtime_amount' `
I have tried the following variations:
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(required=False)
```
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(default_value=None)
```
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(required=False, default_value=None)
```
In the end I have hacked around this by defaulting the value to 0 and then resetting it to None in the resolver if 0.
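For what it's worth, a minimal sketch of another workaround, assuming a standard `mutate` resolver: graphene only passes the arguments the client actually supplied, so giving the optional parameters Python-level defaults in the `mutate` signature avoids the missing-argument error (the output fields here are guessed from the query above):
```python
import graphene

class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float()

    recharge_id = graphene.String()
    message = graphene.String()

    # Optional arguments the client omits are simply not passed to mutate(),
    # so declare Python defaults instead of expecting graphene to inject None.
    def mutate(root, info, msisdn, network_id, product_id=None, airtime_amount=None):
        return RechargeSim(recharge_id="123", message="ok")
```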
|
closed
|
2018-06-14T08:51:32Z
|
2020-05-05T15:54:41Z
|
https://github.com/graphql-python/graphene/issues/769
|
[] |
lee-flickswitch
| 7 |
waditu/tushare
|
pandas
| 1,387 |
Huobi cryptocurrency minute-level BTC data is empty
|
Interface: coin_bar
Huobi returns an error, while OKEx has data.
start_date='2020-04-01 00:00:01',
end_date='2020-04-22 19:00:00': Huobi has no data for this date range either.
ID: 376461
|
open
|
2020-06-29T08:10:04Z
|
2020-06-29T08:10:04Z
|
https://github.com/waditu/tushare/issues/1387
|
[] |
Jackyxiaocc
| 0 |
BeanieODM/beanie
|
pydantic
| 421 |
[BUG] `save_changes()` doesn't throw an error when document doesn't exists
|
**Describe the bug**
In 1.11.9 the state handling changed: instead of always defaulting to None, the state now gets a dict with the default values. This causes `check_if_state_saved` to never throw an error.
`save_changes()` on a newly created document that isn't saved in the database silently does nothing.
**To Reproduce**
```python
user_data = UserEntry(...)
user_data.x = ...
await user_data.save_changes()
```
**Expected behavior**
`StateNotSaved("No state was saved")`
**Additional context**
https://github.com/roman-right/beanie/compare/1.11.8...1.11.9
https://canary.discord.com/channels/822196934973456394/822196935435747332/1042243293662158970
|
closed
|
2022-11-16T13:09:20Z
|
2023-01-04T02:30:38Z
|
https://github.com/BeanieODM/beanie/issues/421
|
[
"Stale"
] |
Luc1412
| 4 |
microsoft/Bringing-Old-Photos-Back-to-Life
|
pytorch
| 210 |
The model runs independently, which can increase the speed!!
|
As we all know, the project is divided into four steps. When I run the model of each step independently, the speed is significantly improved.
|
closed
|
2021-12-24T09:53:44Z
|
2022-02-25T11:01:12Z
|
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/210
|
[] |
davaram
| 1 |
SYSTRAN/faster-whisper
|
deep-learning
| 712 |
Question about Tensor Input Size Changes in Version 1.0.0
|
Hello developers.
I appreciate all your efforts to improve this software.
Now, I noticed that the transcription behavior has changed a lot in version 1.0.0.
I found that the size of the tensor input to the model is different. In other words, the encode output differs from the previous version, so the result of generate also differs. This may be related to the quality of the transcription.
The following code from openai's Whisper shows that the last dimension of mel_segment is padded to be N_FRAMES.
https://github.com/openai/whisper/blob/ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab/whisper/transcribe.py#L276
Therefore, I wonder if the same process as the function pad_or_trim is needed in this repository?
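For reference, a rough sketch of what that padding step does in openai/whisper, assuming `N_FRAMES` is the fixed 30-second window length used there (3000 mel frames); this is an illustration, not code from either repository:
```python
import numpy as np

N_FRAMES = 3000  # 30 s of audio at Whisper's mel hop length

def pad_or_trim(mel_segment: np.ndarray, length: int = N_FRAMES) -> np.ndarray:
    """Zero-pad or trim the last axis to exactly `length` frames."""
    n = mel_segment.shape[-1]
    if n > length:
        return mel_segment[..., :length]
    if n < length:
        pad_widths = [(0, 0)] * (mel_segment.ndim - 1) + [(0, length - n)]
        return np.pad(mel_segment, pad_widths)
    return mel_segment
```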
Note:
The environment I checked is as follows.
OS: Windows 10
Python: 3.9.13
|
closed
|
2024-02-24T01:47:00Z
|
2024-11-14T13:48:55Z
|
https://github.com/SYSTRAN/faster-whisper/issues/712
|
[] |
kale4eat
| 4 |
jina-ai/clip-as-service
|
pytorch
| 614 |
How can I run the server from source rather than install from PyPi?
|
**Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version: 2.3
- Python version: 3.8.2
- `bert-as-service` version: 1.10
- GPU model and memory: nvidia
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir ~/workdir/bert-models/uncased_L-12_H-768_A-12 -num_worker=4
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
When I launch the server, I get `AttributeError: module 'tensorflow' has no attribute 'logging' bert as service`. I see that this has been fixed in master, but it has not been promoted to PyPI yet. How can I run the bert-as-service server from source code as opposed to installing it from `pip`? Thank you.
...
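In case it is useful, the usual way to run a package from source instead of the PyPI release is an editable install from a clone; the layout below (a `server/` subdirectory containing the server package) is my assumption about that repository, so verify it before running:
```bash
# clone the repository and install the server package in editable mode,
# so the code in the working tree (including the fix on master) is what runs
git clone https://github.com/hanxiao/bert-as-service.git
cd bert-as-service
pip install -e ./server   # assumed location of the server's setup.py
bert-serving-start -model_dir ~/workdir/bert-models/uncased_L-12_H-768_A-12 -num_worker=4
```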
|
open
|
2020-12-21T22:40:19Z
|
2021-04-14T10:33:48Z
|
https://github.com/jina-ai/clip-as-service/issues/614
|
[] |
teaxio
| 2 |
OpenBB-finance/OpenBB
|
machine-learning
| 6,777 |
[🕹️] Write a Article Comparing OpenBB and Other Financial Tools
|
### What side quest or challenge are you solving?
Write a Article Comparing OpenBB and Other Financial Tools
### Points
300
### Description
15-October-2024 by Neha Prasad » https://nehaprasad27118.medium.com/openbb-vs-proprietary-financial-tools-the-case-for-open-source-in-finance-9563320ff4cd:
### Provide proof that you've completed the task

|
closed
|
2024-10-15T06:45:59Z
|
2024-10-20T19:01:33Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6777
|
[] |
naaa760
| 8 |
healthchecks/healthchecks
|
django
| 527 |
Make monitoring of healthchecks.io easier by instrumenting metrics / traces.
|
Hi, I would like to instrument healthchecks.io with proper metrics (i.e. tracking HTTP request durations, status codes, etc.) so that monitoring the app in production is a lot easier. Ideally using OpenTelemetry, which also supports traces. https://opentelemetry.io/docs/python/
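A minimal sketch of what that could look like with the OpenTelemetry Python SDK and its Django instrumentation (package names come from opentelemetry-python-contrib, not from this project, and the console exporter is just a stand-in for a real backend):
```python
# requires: opentelemetry-sdk, opentelemetry-instrumentation-django
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor

# configure a tracer provider; swap ConsoleSpanExporter for an OTLP exporter in production
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# patch Django so every request produces a span with duration and status code
DjangoInstrumentor().instrument()
```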
|
closed
|
2021-06-15T07:10:14Z
|
2021-06-15T07:32:15Z
|
https://github.com/healthchecks/healthchecks/issues/527
|
[] |
jmichalek132
| 0 |
babysor/MockingBird
|
pytorch
| 60 |
Supplementary tutorial
|
closed
|
2021-08-28T00:29:54Z
|
2021-08-29T12:05:14Z
|
https://github.com/babysor/MockingBird/issues/60
|
[] |
babysor
| 0 |
|
voila-dashboards/voila
|
jupyter
| 687 |
Error setting template paths; Voila fails to render
|
I'm running Voila to serve a dashboarding notebook to users via Jupyter Hub, using a proxy configuration as follows:
`c.ServerProxy.servers = { 'voila':
{ 'command': ['voila', '--debug', '--enable_nbextensions=True', '--MappingKernelManager.cull_interval=60', '--MappingKernelManager.cull_idle_timeout=120', '--MappingKernelManager.cull_busy=True', '--MappingKernelManager.cull_connected=True', '--no-browser', '--port', '{port}', '--base_url', '{base_url}voila/', '--server_url', '/', '/srv/voila/goldmine.ipynb']
}
}`
This has worked perfectly for many months.
After a recent pip upgrade, only 1 of my users is now able to successfully render the notebook page using Voila; everybody else receives a `jinja2.exceptions.TemplateNotFound: voila.tpl` error.
In comparing the --debug logs for successful and failed scenarios, it appears that the template paths are firstly set correctly (in all cases):
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] **nbconvert template paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/nbconvert_templates**
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] **template paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/templates**
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] static paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/static
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/lib/python3.6/site-packages/voila/static
But for sessions/users that fail, the template paths are subsequently prepended with the user's ~/.local path before pre-processing:
**Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: [Voila] Template paths:
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/html**
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/lab
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/base
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates/compatibility
Of course, the default template does not exist under the first path (/home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/html) and therefore rendering fails with the Jinja template not found issue.
This does not appear to be a permissions issue and nothing else that I'm aware of has changed system-wide.
async-generator==1.10
ipykernel==5.3.0
ipympl==0.5.6
ipysheet==0.4.4
ipython==7.14.0
ipython-genutils==0.2.0
ipytree==0.1.8
ipywidgets==7.5.1
Jinja2==2.11.2
jupyter-client==6.1.3
jupyter-core==4.6.3
jupyter-server==0.1.1
jupyter-server-proxy==1.2.0
jupyterhub==1.0.0
jupyterhub-idle-culler==1.0
jupyterlab==2.2.4
jupyterlab-pygments==0.1.1
jupyterlab-server==1.2.0
nbclient==0.4.3
nbconvert==5.6.1
nbformat==5.0.6
notebook==6.0.3
Pygments==2.6.1
tornado==6.0.4
traitlets==4.3.3
voila==0.1.22
widgetsnbextension==3.5.1
|
closed
|
2020-08-31T18:29:19Z
|
2020-09-02T16:48:52Z
|
https://github.com/voila-dashboards/voila/issues/687
|
[] |
dschofield
| 1 |
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,446 |
CGI Generic SQL Injection (blind)
|
If you'd like to report a bug in Flask-Appbuilder, fill out the template below. Provide
any extra information that may be useful
### Environment
Flask-Appbuilder version: `2.1.9`
### Describe the expected results
I am using Superset with the Flask-AppBuilder version stated above. While doing some scanning using Nessus, it found this problem on the `login` page.
### Describe the actual results
Tell us what happens instead.
```
Using the POST HTTP method, Nessus found that :
+ The following resources may be vulnerable to blind SQL injection :
+ The 'csrf_token' parameter of the /login/ CGI :
/login/ [username=&password=&csrf_token=ImQzYzFjYTZmMWQwMjMxNjcyMzQyOWI1
NGUwYzU1MzYwNTAzZWQ0YjQi.XxqOfw.Rdt9Egs2sOALP63VUCR2zqBKg5Ezz&password=&
csrf_token=ImQzYzFjYTZmMWQwMjMxNjcyMzQyOWI1NGUwYzU1MzYwNTAzZWQ0YjQi.XxqO
fw.Rdt9Egs2sOALP63VUCR2zqBKg5Eyy]
-------- output --------
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>The CSRF tokens do not match.</p>
-------- vs --------
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>The CSRF token is invalid.</p>
------------------------
```
### Steps to reproduce
|
closed
|
2020-07-28T01:16:34Z
|
2020-10-01T10:13:59Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1446
|
[
"question"
] |
syazshafei
| 5 |
Yorko/mlcourse.ai
|
plotly
| 720 |
Data for assignment 4
|
Thanks for the course. I've been working my way through it via cloning the repo off of Github. I can't seem to find the data set for assignment 4 on sarcasm detection. If it is indeed included, apologies; if not, how would you suggest to get it? Via Kaggle or some other means? Thanks again.
|
closed
|
2022-09-12T19:22:28Z
|
2022-09-13T23:01:54Z
|
https://github.com/Yorko/mlcourse.ai/issues/720
|
[] |
jonkracht
| 1 |
CorentinJ/Real-Time-Voice-Cloning
|
python
| 866 |
File could not be found
|
what about the file "vox1_meta.csv"
|
closed
|
2021-10-07T04:36:22Z
|
2021-10-07T05:16:57Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/866
|
[] |
xynulgm6020
| 1 |
NullArray/AutoSploit
|
automation
| 837 |
Divided by zero exception107
|
Error: Attempted to divide by zero.107
|
closed
|
2019-04-19T16:01:23Z
|
2019-04-19T16:37:35Z
|
https://github.com/NullArray/AutoSploit/issues/837
|
[] |
AutosploitReporter
| 0 |
polakowo/vectorbt
|
data-visualization
| 603 |
imageio: fps no longer supported, use duration instead
|
https://github.com/polakowo/vectorbt/blob/8c040429ac65d43ea431dc2789bf4787dd103533/vectorbt/utils/image_.py#LL74C1-L74C72
Clearly `vbt.save_animation` is still passing `fps` as an argument to `imageio.get_writer`, which no longer supports `fps`.
One quick fix would be changing this line to
`with imageio.get_writer(fname, duration=fps // 5, **writer_kwargs) as writer:`
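For context, a hedged sketch of the conversion: newer imageio expresses frame timing as a per-frame `duration` instead of `fps`; the v3 Pillow plugin documents it in milliseconds (older legacy plugins used seconds), so verify against the installed version:
```python
import numpy as np
import imageio

fps = 10
duration = 1000 / fps  # ms per frame, assuming the v3 Pillow plugin semantics

frame = np.zeros((64, 64, 3), dtype=np.uint8)
with imageio.get_writer("out.gif", duration=duration) as writer:
    writer.append_data(frame)  # frames are appended as before; only the timing kwarg changes
```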
|
closed
|
2023-06-04T10:31:54Z
|
2024-03-16T10:45:28Z
|
https://github.com/polakowo/vectorbt/issues/603
|
[] |
xtfocus
| 2 |
automl/auto-sklearn
|
scikit-learn
| 1,650 |
[Question] Opinions on including SplineTransformer as feature preprocessing step
|
I was wondering if there were any plans to bring the [SplineTransformer() preprocessor](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.SplineTransformer.html) into `auto-sklearn` (it became available in `sklearn` in a newer version than the one currently being used). I have been testing it recently as a custom component and it has been achieving great results for me, although I am aware it is the type of preprocessor that could result in models with poor generalization capacity due to its nature.
Have you worked with this preprocessor before? What are your opinions about including it in an automated ML workflow such as `auto-sklearn`?
|
closed
|
2023-03-16T18:06:37Z
|
2023-04-17T21:30:28Z
|
https://github.com/automl/auto-sklearn/issues/1650
|
[] |
MrKevinDC
| 2 |
plotly/dash
|
jupyter
| 2,677 |
Dash.run returns a URL with 404 when calling JupyterDash.infer_jupyter_proxy_config()
|
Hello,
I'm migrating several Jupyter notebooks that use Dash.
They used to rely on https://pypi.org/project/jupyter-plotly-dash/, and I upgraded all Python libraries and migrated to a miniforge environment.
The notebooks are distributed through a JupyterHub, behind nginx as a reverse proxy. Both nginx and JupyterHub are launched inside Docker containers.
**Describe the bug**
I could not get Dash to display, even with the simplest example.
1) When calling jupyter_dash.infer_jupyter_proxy_config() as described on this page: https://dash.plotly.com/dash-in-jupyter, I get a JupyterHub 404 error. The provided URL ([lab/user/fb03416l/proxy/8050/](https://HOST/lab/user/USER_ID/proxy/8050/)) seems to be incorrect.


2) It does not work without calling this function either; whatever jupyter mode argument is given to Dash.run(), the client tries to connect to the local address 127.0.0.1:PORT

Hoping that someone will be able to help me, thx a lot, please see below my configuration files.
**Describe your context**
docker-compose.yaml
```
services:
  nginx:
    container_name: 'nginx-service'
    image: nginx:alpine
    volumes:
      - /appli/visuconti/visuconti_install/nginx.conf.template:/etc/nginx/templates/nginx.conf.template:ro
      - /etc/nginx/ssl:/etc/nginx/ssl:ro
    network_mode: host
    environment:
      - SERVER_NAME='servername'
    restart: always
  jupyterhub:
    container_name: "jupyterhub-service"
    build:
      context: .
      dockerfile: jupyterhub.Dockerfile
    volumes:
      - /appli/visuconti/visuconti_install/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py:ro
      - /home:/home
    network_mode: host
    restart: always
```
nginx.conf.template
```
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name ${SERVER_NAME};

    # Redirect the request to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443 ssl;
    server_name ${SERVER_NAME};

    ssl_certificate /etc/nginx/ssl/feevisu.crt;
    ssl_certificate_key /etc/nginx/ssl/feevisu.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    #ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    # Managing literal requests to the JupyterHub frontend
    location /lab {
        proxy_pass http://0.0.0.0:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }

    # Managing requests to verify letsencrypt host
    location ~ /.well-known {
        allow all;
    }
}
```
jupyterhub.Dockerfile
```
FROM condaforge/miniforge3
ARG DEBIAN_FRONTEND=noninteractive
COPY environment.yaml ./environment.yaml
RUN mamba env create -f ./environment.yaml
# JUPYTER LAB BUILD
RUN conda run -n visuconti-env /bin/bash -c '\
jupyter lab clean && \
jupyter lab build'
RUN apt-get update && apt-get install -y \
sssd \
krb5-user \
net-tools \
sssd-tools \
sssd-dbus \
krb5-user \
krb5-locales \
libkrb5-26-heimdal
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "visuconti-env", "jupyterhub", "-f","/config/jupyterhub_config.py"]
```
environment.yaml
```
name: visuconti-env
dependencies:
- python=3.7
- jupyterlab=3.6.6
- jupyterhub=4.0.2
- dash=2.14.0
- numpy=1.21.6
- netCDF4=1.6.0
- pandas=1.3.5
- scipy=1.7.3
```
jupyterhub_config.py
```
import os
import grp
os.umask(0o002)
c.JupyterHub.admin_access = True
c.JupyterHub.authenticator_class = 'dummy' #'jupyterhub.auth.PAMAuthenticator'
c.JupyterHub.bind_url = 'http://:8000/lab'
c.JupyterHub.cleanup_proxy = True
c.JupyterHub.cleanup_servers = True
c.JupyterHub.hub_connect_ip = '127.0.0.1'
c.JupyterHub.hub_connect_url = 'http://127.0.0.1:12424'
c.JupyterHub.hub_ip = '127.0.0.1'
c.JupyterHub.hub_port = 12424
c.JupyterHub.reset_db = True
c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'
c.Spawner.debug = True
c.Spawner.default_url = '/lab'
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL']
c.Authenticator.allowed_users = {'fb03416l'}
c.Authenticator.delete_invalid_users = True
c.Authenticator.enable_auth_state = False
c.LocalAuthenticator.create_system_users = True
```
|
closed
|
2023-10-27T16:34:35Z
|
2025-03-20T09:18:48Z
|
https://github.com/plotly/dash/issues/2677
|
[
"bug",
"P3"
] |
ferdinandbayard
| 2 |
vllm-project/vllm
|
pytorch
| 14,651 |
[Usage]: how to cache the lora adapter in memory
|
### Your current environment
I want to build a multi-lora service, but the following code seems to reload the Lora adapter every time
```python
class LoraModule(BaseModel):
    name: str
    path: str


class UserRequest(BaseModel):
    lora_module: list[LoraModule]
    question: str


@app.post("/")
async def multi_loras(req: UserRequest):
    params = SamplingParams(max_tokens=512)
    tokenizer = await engine.get_tokenizer()
    messages = tokenizer.apply_chat_template(
        [{"role": "user", "content": req.question}],
        tokenize=False,
        add_generation_prompt=True,
    )
    output = []
    for i, lora in enumerate(req.lora_module):
        generator = engine.generate(
            messages,
            sampling_params=params,
            lora_request=LoRARequest(
                lora_name=lora.name,
                lora_path=lora.path,
                lora_int_id=i,
            ),
            request_id=str(uuid4().hex),
        )
        final_output = None
        async for res in generator:
            final_output = res
        output.append(final_output)
    print(output)
```
### How would you like to use vllm
I noticed in the documentation that the service started via the CLI seems to cache the LoRA adapter in memory, but I didn't find the code that implements it. Can you tell me where it is implemented?
```shell
vllm serve meta-llama/Llama-2-7b-hf \
--enable-lora \
--lora-modules sql-lora=$HOME/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c/
```
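For what it's worth, a sketch of one way to keep an adapter warm on the engine side is to reuse a stable `lora_int_id` per adapter name instead of re-enumerating it on every request, since the engine keys its LoRA cache by that id; the mapping helper below is an assumption of this snippet, not vLLM API:
```python
from vllm.lora.request import LoRARequest

# map adapter name -> stable integer id, so repeated requests present the
# same LoRARequest identity and previously loaded weights can be reused
LORA_IDS: dict[str, int] = {}

def lora_request_for(name: str, path: str) -> LoRARequest:
    lora_id = LORA_IDS.setdefault(name, len(LORA_IDS) + 1)  # ids must stay stable across requests
    return LoRARequest(lora_name=name, lora_path=path, lora_int_id=lora_id)
```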
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
closed
|
2025-03-12T02:23:40Z
|
2025-03-12T05:49:53Z
|
https://github.com/vllm-project/vllm/issues/14651
|
[
"usage"
] |
estuday
| 1 |
deepset-ai/haystack
|
nlp
| 8,953 |
Use new utility method `select_streaming_callback` in all ChatGenerators
|
As we have added `run_async` methods to our ChatGenerators we brought over a useful utility method https://github.com/deepset-ai/haystack/blob/209e6d5ff0f30f0be1774045de2491272bd2bdc2/haystack/dataclasses/streaming_chunk.py#L32-L34
which checks the compatibility of the streaming callback with the async or non-async run method.
We should make sure to use this in all of our ChatGenerators. It has currently only been added to HuggingFaceAPIChatGenerator (both the run and run_async methods) and the OpenAIChatGenerator (only the run_async method)
|
open
|
2025-03-04T08:24:24Z
|
2025-03-21T07:01:46Z
|
https://github.com/deepset-ai/haystack/issues/8953
|
[
"P2"
] |
sjrl
| 0 |
ultralytics/ultralytics
|
machine-learning
| 19,082 |
nms=true for exporting to onnx
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
i get this error
```
(yolo) root@workstation-016:/mnt/4T/Tohidi/object_detector_service# yolo export model=yolo11x.pt nms=true format=engine device=3
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:3 (NVIDIA H100 PCIe, 80995MiB)
YOLO11x summary (fused): 464 layers, 56,919,424 parameters, 0 gradients, 194.9 GFLOPs
Traceback (most recent call last):
  File "/opt/anaconda3/envs/yolo/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/model.py", line 740, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 354, in __call__
    y = NMSModel(model, self.args)(im) if self.args.nms and not coreml else model(im)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 1559, in forward
    extra_shape = pred.shape[-1] - (4 + self.model.nc)  # extras from Segment, OBB, Pose
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'DetectionModel' object has no attribute 'nc'
```
### Environment
```
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:0 (NVIDIA H100 80GB HBM3, 80995MiB)
Setup complete ✅ (255 CPUs, 1007.7 GB RAM, 1807.6/1831.2 GB disk)
OS Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 1007.65 GB
Disk 1807.6/1831.2 GB
CPU AMD EPYC 7773X 64-Core Processor
CPU count 255
GPU NVIDIA H100 80GB HBM3, 80995MiB
GPU count 6
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
yolo export model=yolo11x.pt format=engine device=3 nms=true
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2025-02-05T12:00:05Z
|
2025-02-06T02:43:54Z
|
https://github.com/ultralytics/ultralytics/issues/19082
|
[
"bug",
"fixed",
"exports"
] |
mohamad-tohidi
| 2 |
paperless-ngx/paperless-ngx
|
django
| 8,811 |
[BUG] Process mail doesn't work as non-superuser
|
### Description
I tried the now Process Mail button as added with v2.14.
https://github.com/paperless-ngx/paperless-ngx/pull/8466
- It works when the button is pressed by a superuser.
- It doesn't work when it's pressed by a non-superuser (despite having admin and all mail permissions).
It shows a 403 Forbidden error:
```json
{
"headers": {
"normalizedNames": {},
"lazyUpdate": null
},
"status": 403,
"statusText": "Forbidden",
"url": "http://192.168.2.194:8010/api/mail_accounts/1/process/",
"ok": false,
"name": "HttpErrorResponse",
"message": "Http failure response for http://192.168.2.194:8010/api/mail_accounts/1/process/: 403 Forbidden",
"error": {
"detail": "You do not have permission to perform this action."
}
}
```

### Steps to reproduce
1. Login as non-superuser
2. Mail -> Process Mail
3. Error 403 Forbidden
### Webserver logs
```bash
webserver-1 | [2025-01-19 16:20:10,986] [WARNING] [django.request] Forbidden: /api/mail_accounts/1/process/
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.4
### Host OS
Ubuntu 24.04.1/docker compose
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.4",
"server_os": "Linux-6.8.0-51-generic-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 23002126852096,
"available": 15696873656320
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "mfa.0003_authenticator_type_uniq",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-19T16:16:06.481368+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-01-19T15:05:27.104617Z",
"classifier_error": null
}
}
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description.
|
closed
|
2025-01-19T15:26:24Z
|
2025-02-19T03:07:47Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/8811
|
[
"not a bug"
] |
alexhk
| 6 |
dask/dask
|
numpy
| 11,701 |
`bind` parameter for preserving keys of the regenerated nodes
|
The docstring of [bind](https://docs.dask.org/en/stable/graph_manipulation.html#dask.graph_manipulation.bind) mentions regarding the `returns`:
> The keys of the regenerated nodes will be different from the original ones, so that they can be used within the same graph.
As mentioned in https://github.com/dask/dask/issues/9333, this may be inconvenient if the input `children` already have set `dask_key_name`. As @crusaderky [wrote](https://github.com/dask/dask/issues/9333#issuecomment-1215758430), this works as intended because it's designed so that you can use the original and the bound keys together. That's perfectly reasonable as a default but in my use case I need only the bound keys; they effectively replace the original ones. So it would be nice if `bind` takes a `regenerate_keys=True` optional parameter to allow preserving the original keys.
In the meantime, what's a manual way to restore the original keys? I tried setting `._key` but apparently that's not enough; the graph still refers to the regenerated names.
|
closed
|
2025-01-25T18:16:46Z
|
2025-01-27T19:04:57Z
|
https://github.com/dask/dask/issues/11701
|
[
"needs triage"
] |
gsakkis
| 7 |
Yorko/mlcourse.ai
|
plotly
| 703 |
patreon payment
|
Hi, I paid the $17 for the bonus assignment, but I have no way to access it. Please help.
|
closed
|
2022-03-16T08:40:31Z
|
2022-03-16T19:07:14Z
|
https://github.com/Yorko/mlcourse.ai/issues/703
|
[] |
vahuja4
| 1 |
davidsandberg/facenet
|
computer-vision
| 808 |
The extracted features are not the same when the validation code is run twice
|
Hi, David,
Thanks a lot for sharing this repo. I noticed that when extracting features on LFW, the features are not the same when I run the code twice. Could you please tell me how this happens? Should I crop the images to a certain size?
|
open
|
2018-07-09T18:05:23Z
|
2018-07-09T18:05:23Z
|
https://github.com/davidsandberg/facenet/issues/808
|
[] |
xingdi1990
| 0 |
ploomber/ploomber
|
jupyter
| 1,088 |
Feature Request: Inversion of Control features should be supported in notebooks
|
We're working on adopting ploomber as our pipeline management technology. In early experimentation, I've found that many of the best inversion of control features of ploomber don't seem to be supported for notebooks. I find this odd because of the amount of attention and ink spent on integrating jupyter notebooks.
Examples (in order of importance):
- Serializer and deserializer don't appear to be supported for notebook upstreams and products. They seem to always be paths that the notebook author must handle.
- Clients don't appear to be supported for notebooks. It's only possible to manually instantiate them in the notebook.
- Injection substitutes absolute paths. This results in multiple editors accidentally fighting over the upstream and product cells in source control even if no real edits are made to the notebook.
The extensions you've added to make a jupyter[lab] server work well with plain 'ol .py files are very useful but I was disappointed in the small subset of features available to notebook tasks. This breaks the most powerful features of ploomber when using notebooks. Pure python tasks can use clients and serializers to improve testability and make large changes possible with tiny reliable changes to the pipeline spec. You can develop using human-readable formats and the local filesystem and then use binary formats and cloud storage in production with a couple of lines of yaml when using pure python but this is not possible with notebooks. Further, ploomber teaches certain concepts and expectations around upstreams and products when using python tasks that are not valid when using notebooks.
Suggestion: abstract upstream and product into python objects you import instead of injecting dictionaries of strings into notebooks.
```python
# %% tags=["parameters"]
# add default values for parameters here
# %% tags=["injected-parameters"]
# Parameters
upstream = {
"input": "\some\wild\absolute-path\input.csv"
}
product = {
"data": "\some\wild\absolute-path\data.csv",
"nb": "\some\wild\absolute-path\nb.csv",
}
# %%
df = pandas.read_csv(upstream['input'])
result = do_some_stuff(df)
result.to_csv(product['data'])
```
could become:
```python
# %% tags=["parameters"]
# add default values for parameters here
upstream, product = {}, {}
# %% tags=["injected-parameters"]
# Parameters
upstream, product = ploomber.nb.get_context() # knows the current state of the pipeline and uses it to populate upstream and product
# %%
df = upstream['input'] # deserializer and client populate the object instead of the path
result = do_some_stuff(df)
product['data'] = result # serializer and client encode and store the result instead of the notebook doing it using a path
```
|
open
|
2023-03-29T22:58:38Z
|
2023-03-30T00:16:27Z
|
https://github.com/ploomber/ploomber/issues/1088
|
[] |
marr75
| 3 |
bauerji/flask-pydantic
|
pydantic
| 71 |
Raise classical Pydantic ValidationError like FastApi
|
Hello,
I'm working with this library and I found the option to raise errors (`FLASK_PYDANTIC_VALIDATION_ERROR_RAISE = True`).
I was expecting the same kind of error as in FastAPI/Pydantic combination:
```json
{
"errors":[
{
"loc":[
"query",
"request-mode"
],
"msg":"field required",
"type":"value_error.missing"
},
{
"loc":[
"body",
"birth_date"
],
"msg":"field required",
"type":"value_error.missing"
}
]
}
```
In Pydantic, all errors are in the `errors` array and the location (header, body...) is specified directly in "loc".
In Flask-Pydantic, errors are grouped separately according to the location:
```json
{
"body":[
{
"loc":[
"birth_date"
],
"msg":"field required",
"type":"value_error.missing"
}
],
"query":[
{
"loc":[
"request-mode"
],
"msg":"field required",
"type":"value_error.missing"
}
]
}
```
The `ValidationError(BaseFlaskPydanticException)` exception `e` is raised and you can look for each group errors according to the location:
- `e.body_params`
- `e.form_params`
- `e.path_params`
- `e.query_params`
What I would like is, for instance, to add the `e.errors` category which contains all the errors, formatted as in the Pydantic library used by FastAPI.
Thank you!
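A rough sketch of flattening those groups into a single FastAPI-style `errors` list from the exception the library already raises; the attribute names are the ones listed above, while the helper itself is hypothetical:
```python
def flatten_validation_errors(e) -> list[dict]:
    """Merge Flask-Pydantic's per-location error groups into one list,
    prefixing each `loc` with its location the way FastAPI does."""
    groups = {
        "body": e.body_params,
        "form": e.form_params,
        "path": e.path_params,
        "query": e.query_params,
    }
    errors = []
    for location, group in groups.items():
        for err in group or []:
            errors.append({**err, "loc": [location, *err.get("loc", [])]})
    return errors
```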
|
open
|
2023-04-20T13:09:20Z
|
2023-04-20T13:28:55Z
|
https://github.com/bauerji/flask-pydantic/issues/71
|
[] |
Merinorus
| 1 |
pytorch/pytorch
|
numpy
| 149,302 |
Reduce memory consumption in broadcast bmm
|
### 🚀 The feature, motivation and pitch
Here is a minimal example that consumes about 66 GiB of CUDA memory (I guess it may expand `b` to [8192,32,1024,128] before the calculation). Is it possible to reduce the memory consumption without expanding?
`a=torch.rand((8192,32,1,1024),dtype=torch.bfloat16,device='cuda:0')`
`b=torch.rand((1,32,1024,128),dtype=torch.bfloat16,device='cuda:0')`
`c=torch.matmul(a,b)`
Versions:
torch: 2.6.0+cu126
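One hedged workaround sketch (not from the issue): because `b` broadcasts over a batch of 1, the same product can be computed with `torch.bmm` batched over the shared dimension of 32, which avoids materializing an expanded [8192, 32, 1024, 128] tensor:
```python
import torch

a = torch.rand((8192, 32, 1, 1024), dtype=torch.bfloat16, device="cuda:0")
b = torch.rand((1, 32, 1024, 128), dtype=torch.bfloat16, device="cuda:0")

# [8192, 32, 1024] -> [32, 8192, 1024]: batch over the 32 heads shared with b
a2 = a.squeeze(2).transpose(0, 1)
c = torch.bmm(a2, b.squeeze(0))        # [32, 8192, 128]
c = c.transpose(0, 1).unsqueeze(2)     # [8192, 32, 1, 128], matches torch.matmul(a, b)
```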
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
|
closed
|
2025-03-17T07:58:39Z
|
2025-03-19T07:53:31Z
|
https://github.com/pytorch/pytorch/issues/149302
|
[
"module: cuda",
"module: memory usage",
"triaged",
"module: linear algebra",
"needs design",
"matrix multiplication"
] |
zheyishine
| 4 |
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 425 |
SearchGraph error while following the example
|
```
from search_graph import SearchGraph
# Define the prompt and configuration
prompt = "What is Chioggia famous for?"
config = {
"llm": {"model": "gpt-3.5-turbo"}
}
# Create the search graph
search_graph = SearchGraph(prompt, config)
# Run the search graph
result = search_graph.run()
print(result)
```
This gives the following error even though I didn't modify the code. Any idea how to fix this error?
```
Exception has occurred: OutputParserException
Invalid json output: {
"answer": {
"Chioggia is famous for offering a more authentic Italian experience compared to Venice, having a significant fishing industry and a rich history tied to the Venetian Republic, featuring a must-visit fish market with a wide variety of fresh seafood, and providing an excellent dining experience at Baia dei Porci with a focus on local seafood dishes."
}
}
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 4 column 5 (char 382)
The above exception was the direct cause of the following exception:
File "/home/dongwook/Project/auto_crawl/toy.py", line 18, in <module>
raw_result = search_graph.run()
langchain_core.exceptions.OutputParserException: Invalid json output: {
"answer": {
"Chioggia is famous for offering a more authentic Italian experience compared to Venice, having a significant fishing industry and a rich history tied to the Venetian Republic, featuring a must-visit fish market with a wide variety of fresh seafood, and providing an excellent dining experience at Baia dei Porci with a focus on local seafood dishes."
}
}
```
|
closed
|
2024-07-01T11:47:54Z
|
2025-02-17T21:28:03Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/425
|
[] |
dwk601
| 7 |
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 651 |
Training accuracy problem
|
Hello,
I used my own data to train U2Net, and it shows "[epoch: 0] val_MAE: 0.067 val_maxF1: 0.000
Epoch: [1] [ 0/48] eta: 0:05:28 lr: 0.000511 loss: 1.3729 (1.3729) time: 6.8397 data: 6.2014 max mem: 3338".
What could be the problem? Also, at prediction time the predicted images are all black.
When loading data, can data in tif format be loaded?
Thank you for sharing!
Looking forward to your reply!
|
closed
|
2022-09-29T03:07:33Z
|
2022-11-20T03:37:28Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/651
|
[] |
adminmyw
| 3 |
FactoryBoy/factory_boy
|
django
| 825 |
Providing a `django.mute_signals` like mechanism for SQLAlchemy
|
#### The problem
SQLAlchemy provides a listener mechanism, similar to Django signals, for executing code based on event handlers like pre_save or post_save. Some models are hooked into this mechanism and code is executed `before`/`after` `insert/update/delete` events. Sometimes those pieces of code are not relevant within the factory scope, and we want something able to unplug them on the fly.
#### Proposed solution
In the flavour of `django.mute_signals`, provide a context manager/decorator able to mute SQLAlchemy listeners, based on this kind of declaration:
```python
class ModelWithListener(Model):
    id = ...


@listen_for(ModelWithListener, 'after_insert')
def unwanted_function(...):
    ...
```
```python
with mute_listeners([ModelWithListener, 'after_insert', unwanted_function]):
    ModelWithListenerFactory()
```
or
```python
@mute_listeners([(ModelWithListener, 'after_insert', unwanted_function)])
def test():
    ModelWithListenerFactory()
```
We could easily imagine an option attribute declared in the Meta of the factory:
```python
class ModelWithListenerFactory(Factory):
class Meta:
model = ModelWithListener
sqlalchemy_mute_listeners = [
('after_insert', unwanted_function)
]
```
|
open
|
2020-12-04T14:03:29Z
|
2020-12-04T15:57:06Z
|
https://github.com/FactoryBoy/factory_boy/issues/825
|
[
"Feature",
"SQLAlchemy"
] |
moumoutte
| 1 |
django-oscar/django-oscar
|
django
| 3,738 |
Caching when no search results found - 2.1.1
|
I am using Python 3.6, Oscar 2.1.1 (Django 2.2) with Haystack + Solr 8; the cache is memcached 1.6.9. I didn't fork the search app. Per-site caching is not turned on. I tried switching to the default cache (disabling memcached), but that didn't help. Settings:
```
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '172.11.22.33:11211',
}
}
```
modified facet:
```
OSCAR_SEARCH_FACETS = {
'fields': OrderedDict([
# ('product_class', {'name': _('Type'), 'field': 'product_class'}),
('rating', {'name': _('Rating'), 'field': 'rating'}),
]),
'queries': OrderedDict([
('price_range',
{
'name': _('Price range'),
'field': 'price',
'queries': [
# This is a list of (name, query) tuples where the name will
# be displayed on the front-end.
(_('0 to 20'), u'[0 TO 20]'),
(_('20 to 40'), u'[20 TO 40]'),
(_('40 to 60'), u'[40 TO 60]'),
(_('60+'), u'[60 TO *]'),
]
}),
]),
}
```
options:
```
'loaders': [
('django.template.loaders.cached.Loader', [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
]),
],
```
### Steps to Reproduce
1. I have a lot of 'cbd' products. When I search for 'cbd', they are correctly found.
2. When I search for 'cbw' (/?s=cbw), a page with 0 results is rendered.
3. After that, when I try to search for 'cbd', the same template for 'cbw' is rendered again: the newly searched text is correctly replaced in the URL (/?s=cbd), but the part of the template with the results stays the same (Produits correspondant à "cbw": 0 résultat trouvé); it seems to be cached. Even when I delete (?s=cbd) from the URL, the /search/ page is still cached with 'Produits correspondant à "cbw": 0 résultat trouvé'.
|
closed
|
2021-07-23T15:25:08Z
|
2021-07-29T14:04:05Z
|
https://github.com/django-oscar/django-oscar/issues/3738
|
[] |
xplsek03
| 1 |
kynan/nbstripout
|
jupyter
| 98 |
Add `--global` option to `--install` to save filter config to `global .gitconfig`
|
Presently, `nbstripout --install` modifies the repo `.git/config`. This is less portable than saving the path to `nbstripout` and the filters in the user's global `.gitconfig`.
It would be nice to have a command such as:
`nbstripout --install --global --attributes .gitattributes` that would create a `.gitattributes` file in the current repo but save the filters and path globally. Then, every repo with the `.gitattributes` file would be stripped without needing to install nbstripout in every cloned repository.
See conversation #7
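For reference, a sketch of what the manual user-level setup looks like with plain git configuration; these are standard git settings, not new nbstripout flags, and they assume `nbstripout` is on the PATH:
```bash
# register the filter once in the user-level ~/.gitconfig
git config --global filter.nbstripout.clean 'nbstripout'
git config --global filter.nbstripout.smudge cat
git config --global filter.nbstripout.required true

# point git at a user-level attributes file and map notebooks to the filter
git config --global core.attributesFile ~/.gitattributes_global
echo '*.ipynb filter=nbstripout' >> ~/.gitattributes_global
```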
|
closed
|
2019-05-30T20:23:19Z
|
2019-07-22T22:53:16Z
|
https://github.com/kynan/nbstripout/issues/98
|
[
"type:enhancement",
"resolution:fixed"
] |
jraviotta
| 6 |
MagicStack/asyncpg
|
asyncio
| 772 |
How to do more than one cleanup task in cancelled state?
|
<!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.23.0
* **PostgreSQL version**: 12.6
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: no/ yes
* **Python version**:3.9.5
* **Platform**: Fedora Linux
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**: n/a
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: have only tested w/ asyncio directly
We have a user illustrating the case where a task being cancelled still lets us reach a finally: block, but there we can run at most one awaitable cleanup step on asyncpg in order to close out the connection. If we have more than one thing to await, such as emitting a ROLLBACK or anything else, we don't get the chance to close() the connection. The connection then ends up somewhere we no longer have any reference to it, yet asyncpg still leaves it open; our own GC handlers that are supposed to take care of this are never called.
One way to illustrate it, in a way that also shows how I'm framing the question of "how to solve this problem?", is having two separate asyncpg connections that we are working with in the same awaitable. If cancel() is called, I can reach the finally: block, and I can then close at most one of the connections, but not both. In the real case we are using only one connection, but we are trying to emit a ROLLBACK and also do other awaitable things before we get to the .close().
What I don't understand is why gc isn't collecting these connections, or why they aren't getting closed.
```python
import asyncio
from asyncio import current_task

import asyncpg


async def get_and_cancel():
    c1 = await asyncpg.connect(
        user="scott", password="tiger", host="localhost", database="test"
    )
    c2 = await asyncpg.connect(
        user="scott", password="tiger", host="localhost", database="test"
    )
    try:
        r1 = await c1.fetch("SELECT 1")
        r2 = await c2.fetch("SELECT 1")
        current_task().cancel()
    finally:
        # we get here...
        # this seems to affect the asyncpg connection, the await is
        # honored....
        await c1.close()
        # but we never get here. connection leaks. canonical way to
        # solve this issue?
        await c2.close()


async def main():
    while True:
        try:
            await get_and_cancel()
        except asyncio.exceptions.CancelledError:
            pass


asyncio.run(main())
```
the stack trace is that we've run out of connections:
```
Traceback (most recent call last):
File "/home/classic/dev/sqlalchemy/test4.py", line 39, in <module>
asyncio.run(main())
File "/usr/lib64/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/classic/dev/sqlalchemy/test4.py", line 34, in main
await get_and_cancel()
File "/home/classic/dev/sqlalchemy/test4.py", line 11, in get_and_cancel
c2 = await asyncpg.connect(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connection.py", line 1981, in connect
return await connect_utils._connect(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 732, in _connect
con = await _connect_addr(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 632, in _connect_addr
return await __connect_addr(params, timeout, True, *args)
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 682, in __connect_addr
await compat.wait_for(connected, timeout=timeout)
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/compat.py", line 103, in wait_for
return await asyncio.wait_for(fut, timeout)
File "/usr/lib64/python3.9/asyncio/tasks.py", line 481, in wait_for
return fut.result()
asyncpg.exceptions.TooManyConnectionsError: remaining connection slots are reserved for non-replication superuser connections
```
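A hedged workaround sketch (not from the issue thread): asyncpg connections also expose a synchronous `terminate()`, so cleanup that must survive a pending cancellation can avoid extra await points entirely:
```python
import asyncpg

async def get_and_cancel_terminating():
    c1 = await asyncpg.connect(user="scott", password="tiger", host="localhost", database="test")
    c2 = await asyncpg.connect(user="scott", password="tiger", host="localhost", database="test")
    try:
        await c1.fetch("SELECT 1")
        await c2.fetch("SELECT 1")
    finally:
        # terminate() drops the connection immediately without awaiting, so a
        # pending CancelledError cannot cut the cleanup short after one step
        c1.terminate()
        c2.terminate()
```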
|
closed
|
2021-06-18T13:36:12Z
|
2021-06-19T00:13:19Z
|
https://github.com/MagicStack/asyncpg/issues/772
|
[] |
zzzeek
| 6 |
jupyterlab/jupyter-ai
|
jupyter
| 1,277 |
Chat Interface Throws Error When Model Provider is Ollama but Works in Notebook
|
## Description
I'm working on a blogpost on Jupyter AI and I had completed the draft.
Article Draft : https://docs.google.com/document/d/1N59WnVCDOzFX2UdfPW_G-eRet5AkCpNcJGbZ5uXw6HI/edit?usp=sharing
Everything was working seamlessly as can be seen from the screenshots in the article. The Ollama integration in Jupyter AI worked as expected in both notebooks and the chat interface. However, now the chat interface throws an error, while the notebook-based interactions still function correctly.
## Environment Details
- **OS**: macOS 14
- **Python Version**: 3.13.1
- **JupyterLab Version**: 4.3.6
- **Jupyter AI Version**: 2.30.0
## Steps to Reproduce
<img width="1144" alt="Image" src="https://github.com/user-attachments/assets/e12e0caa-9c25-49aa-b450-d91237bff391" />
<img width="657" alt="Image" src="https://github.com/user-attachments/assets/e3a2ed90-ff03-4d2e-8c56-35d4dfe41df0" />
## Error Message
Traceback (most recent call last):
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/base.py", line 229, in on_message
await self.process_message(message)
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/default.py", line 72, in process_message
await self.stream_reply(inputs, message)
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/base.py", line 567, in stream_reply
async for chunk in chunk_generator:
...<32 lines>...
break
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3439, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3422, in atransform
async for chunk in self._atransform_stream_with_config(
...<5 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3392, in _atransform
async for output in final_pipeline:
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5584, in atransform
async for item in self.bound.atransform(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 4954, in atransform
async for output in self._atransform_stream_with_config(
...<5 lines>...
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 4935, in _atransform
async for chunk in output.astream(
...<7 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3439, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3422, in atransform
async for chunk in self._atransform_stream_with_config(
...<5 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3392, in _atransform
async for output in final_pipeline:
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/output_parsers/transform.py", line 85, in atransform
async for chunk in self._atransform_stream_with_config(
...<2 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2266, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/utils/aiter.py", line 74, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/utils/aiter.py", line 123, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 1473, in atransform
async for output in self.astream(final, config, **kwargs):
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 512, in astream
async for chunk in self._astream(
...<14 lines>...
generation += chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py", line 755, in _astream
async for stream_resp in self._acreate_chat_stream(messages, stop, **kwargs):
...<23 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py", line 575, in _acreate_chat_stream
async for part in await self._async_client.chat(**chat_params):
yield part
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/ollama/_client.py", line 672, in inner
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: model is required (status code: 400)
Would appreciate any insights from the maintainers. Thanks!
|
open
|
2025-03-17T18:10:53Z
|
2025-03-19T18:40:17Z
|
https://github.com/jupyterlab/jupyter-ai/issues/1277
|
[
"bug"
] |
parulnith
| 7 |
autokey/autokey
|
automation
| 39 |
Migrate wiki from code.google.com archives
|
closed
|
2016-11-08T05:30:24Z
|
2016-11-08T06:05:22Z
|
https://github.com/autokey/autokey/issues/39
|
[] |
troxor
| 1 |
|
matterport/Mask_RCNN
|
tensorflow
| 2,502 |
Is resizing of the input image done in both training and predicting?
|
In the hyperparameters, the standard settings for input image resizing are as below:
`IMAGE_RESIZE_MODE = "square"`
`IMAGE_MIN_DIM = 800`
`IMAGE_MAX_DIM = 1024`
I have input images that are 6080x3420, so to my understanding, these are resized to 1024x1024 and padded with zeroes to make a square image. Does this happen both in training and when predicting with the trained model?
I ask because I have a model trained on the 6080x3420 images with the above standard settings, but I have noticed that downscaling the test images before predicting has an influence on prediction accuracy. Effectively, the prediction accuracy is highest when downscaling the test images to 12.5% of the original size before running the model on them.
|
open
|
2021-03-09T16:39:29Z
|
2021-03-09T16:39:29Z
|
https://github.com/matterport/Mask_RCNN/issues/2502
|
[] |
TECOLOGYxyz
| 0 |
microsoft/nni
|
deep-learning
| 5,161 |
HPO remote mode question
|
**Describe the issue**:
Where can I find an example of running HPO remotely?
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
closed
|
2022-10-15T12:34:32Z
|
2022-10-15T12:36:27Z
|
https://github.com/microsoft/nni/issues/5161
|
[] |
LS11111
| 0 |
Miserlou/Zappa
|
django
| 1,412 |
Set PYTHON_EGG_CACHE for flask apps during init
|
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
I discovered that in my Flask deployment, the app deploys fine with e.g. `zappa init; zappa deploy dev` however upon hitting the generated endpoint a failure is returned.
## Expected Behavior
<!--- Tell us what should happen -->
You should be able to get your expected response from whatever endpoint is hit.
## Actual Behavior
<!--- Tell us what happens instead -->
You get this response:
```
"{u'message': u'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', u'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 452, in handler\\n response = Response.from_app(self.wsgi_app, environ)\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 903, in from_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 57, in _run_wsgi_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/test.py\", line 884, in run_wsgi_app\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
```
`zappa tail dev` yields the following:
```
[1519342540529] Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 30] Read-only file system: '/home/sbx_user1060'
The Python egg cache directory is currently set to:
/home/sbx_user1060/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Seems that PYTHON_EGG_CACHE needs to be set as an environment variable to '/tmp'. I solved by including the following in my zappa_settings.json:
```json
"environment_variables": {
"PYTHON_EGG_CACHE": "/tmp"
}
```
Unsure if this is Flask specific, or if I stuffed up somewhere, or if this is actually expected behaviour...
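If editing `zappa_settings.json` isn't an option, I imagine the same effect could be had by setting the variable from code before any egg-backed import runs; this is only a sketch and I haven't verified it on Lambda:
```python
# Hypothetical alternative (not verified on Lambda): point the egg cache at /tmp
# from code, before importing anything that is shipped as a zipped egg.
import os

os.environ.setdefault("PYTHON_EGG_CACHE", "/tmp")  # /tmp is the only writable path in Lambda

from flask import Flask  # imports that may trigger egg extraction come after the env var is set

app = Flask(__name__)
```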
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Make a flask app
2. `zappa init`
3. `zappa deploy dev`
4. poke API endpoint
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: 4.13.0-32-generic #35~16.04.1-Ubuntu | Python 2.7.12
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
```json
{
"dev": {
"app_function": "*****.api.API",
"aws_region": "ap-southeast-2",
"profile_name": "*****",
"project_name": "api",
"runtime": "python2.7",
"s3_bucket": "*****",
"environment_variables": {
"*****": "*****",
"PYTHON_EGG_CACHE": "/tmp"
},
"domain": "*****.*****",
"cors": true,
"certificate_arn": "arn:aws:acm:us-east-1:*******"
}
```
|
open
|
2018-02-23T01:18:21Z
|
2018-03-07T23:42:09Z
|
https://github.com/Miserlou/Zappa/issues/1412
|
[
"bug",
"enhancement"
] |
L226
| 2 |
tensorpack/tensorpack
|
tensorflow
| 947 |
Multiple-Input Batched Data
|
I'm trying to feed an RNN+LSTM network using the `KerasModel` layer, and in my case, I need to use 2 inputs and get a `Dense(1)` output.
```python
train, test, val = dataset.get('train'), dataset.get('test'), dataset.get('val')
train_ds = BatchData(train, batch_size, use_list=True)
test_ds = BatchData(test, batch_size, use_list=True)
M = KerasModel(create_model,
inputs_desc=[
InputDesc(tf.float32, [
None, timesteps_val, len(features_per_timestep)], 'input_a'),
InputDesc(tf.float32, [
None, timesteps_val, 96, 96, 1], 'input_b'),
],
targets_desc=[InputDesc(tf.float32, [None, 1], 'labels')],
input=QueueInput(train_ds))
```
If I remove `use_list=True`, I get an error saying that batched data only works with NumPy arrays. If I remove batching, or keep it the way it is above, I get:
```
[1024 15:03:14 @input_source.py:168] ERR Exception in EnqueueThread QueueInput/input_queue:
Traceback (most recent call last):
File "/Users/brunoalano/.local/share/virtualenvs/research-laYaeRqi/lib/python3.6/site-packages/tensorpack/input_source/input_source.py", line 159, in run
feed = _make_feeds(self.placehdrs, dp)
File "/Users/brunoalano/.local/share/virtualenvs/research-laYaeRqi/lib/python3.6/site-packages/tensorpack/input_source/input_source.py", line 41, in _make_feeds
len(datapoint), len(placeholders))
AssertionError: Size of datapoint and placeholders are different: 2 != 3
```
Details:
- Python Version: Python 3.6.5
- TF Version: v1.11.0-rc2-4-gc19e29306c 1.11.0
- Tensorpack Version (from git): 0.8.9
|
closed
|
2018-10-24T18:08:08Z
|
2018-11-27T04:17:21Z
|
https://github.com/tensorpack/tensorpack/issues/947
|
[
"unrelated"
] |
brunoalano
| 2 |
jmcnamara/XlsxWriter
|
pandas
| 294 |
Feature request: Watermark support
|
Hi,
I was looking in the documentation and couldn't find any clue about how to set a background image in a worksheet. I want to add a watermark to printed pages. Is this feature supported? If not, does any workaround exist?
|
closed
|
2015-09-01T09:06:05Z
|
2021-05-12T09:27:36Z
|
https://github.com/jmcnamara/XlsxWriter/issues/294
|
[
"question",
"ready to close"
] |
Valian
| 3 |
sczhou/CodeFormer
|
pytorch
| 308 |
ModuleNotFoundError: No module named 'facelib'
|
```
Traceback (most recent call last):
File "inference_codeformer.py", line 10, in <module>
from facelib.utils.face_restoration_helper import FaceRestoreHelper
ModuleNotFoundError: No module named 'facelib'
```
Then I tried to install the library, but no suitable version is available. What should I do?
```
pip install facelib
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
Could not fetch URL https://pypi.tuna.tsinghua.edu.cn/simple/facelib/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Max retries exceeded with url: /simple/facelib/ (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))) - skipping ERROR: Could not find a version that satisfies the requirement facelib (from versions: none)
ERROR: No matching distribution found for facelib
```
|
closed
|
2023-09-25T14:25:43Z
|
2023-12-15T03:56:09Z
|
https://github.com/sczhou/CodeFormer/issues/308
|
[] |
FaShi-x
| 1 |
elliotgao2/toapi
|
flask
| 121 |
ImportError: cannot import name 'XPath'
|
Where did XPath go??? It looks like you removed XPath???

|
closed
|
2018-04-26T17:21:16Z
|
2021-12-25T05:37:50Z
|
https://github.com/elliotgao2/toapi/issues/121
|
[] |
sparkyvxcx
| 2 |
jonaswinkler/paperless-ng
|
django
| 185 |
[Docker] Please export port 8000 for plesk
|
Hi there, merry Christmas!
I am trying to install paperless-ng on a server using Plesk. It's working and I got it all running - but Plesk does not recognize the port you're exposing, which prevents me from actually accessing the service. I am using your docker image from https://hub.docker.com/r/jonaswinkler/paperless-ng
I found this article, where they describe in the comments what needs to be done
https://support.plesk.com/hc/en-us/articles/115003142213-Unable-to-add-Docker-Proxy-rules-in-Plesk-no-container-is-displayed
The way I see it, all we'd need would be an EXPOSE 8000 in the docker file. But I am no expert and just guessing wildly. Would be much appreciated!
Best,
Jens
|
closed
|
2020-12-24T11:57:21Z
|
2020-12-31T01:28:16Z
|
https://github.com/jonaswinkler/paperless-ng/issues/185
|
[
"fixed in next release"
] |
influjensbahr
| 2 |
harry0703/MoneyPrinterTurbo
|
automation
| 600 |
Collaboration request
|
### Is there already a similar feature request?
- [x] I have searched the existing feature requests
### Pain point
Hello, I'd like to recommend a collaboration with the open-source project https://github.com/volcengine/ai-app-lab. Would you be interested? Could we discuss it by email? [email protected]
### Suggested solution
Hello, I'd like to recommend a collaboration with the open-source project https://github.com/volcengine/ai-app-lab. Would you be interested? Could we discuss it by email? [email protected]
### Useful resources
_No response_
### Other information
_No response_
|
open
|
2025-03-05T04:31:24Z
|
2025-03-05T04:31:24Z
|
https://github.com/harry0703/MoneyPrinterTurbo/issues/600
|
[
"enhancement"
] |
ljg52603681
| 0 |
PokeAPI/pokeapi
|
api
| 733 |
Missing Array length
|
Hey,
I worked in C# with this API and it was very cool. But I missed something.
It would be really nice to have a field which shows the length of the following array.
For example like this:
```
"types_length": 2,
"types": [
{
"slot": 1,
"type": {
"name": "grass",
"url": "https://pokeapi.co/api/v2/type/12/"
}
},
{
"slot": 2,
"type": {
"name": "poison",
"url": "https://pokeapi.co/api/v2/type/4/"
}
}
]
```
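Until such a field exists, the length can of course be computed client-side after parsing the response; a quick Python illustration (the same applies to C#'s `Count`/`Length`):
```python
# Compute the array length client-side from the parsed JSON response.
import requests

data = requests.get("https://pokeapi.co/api/v2/pokemon/bulbasaur").json()
types_length = len(data["types"])  # what the proposed "types_length" field would contain
print(types_length)  # 2 (grass, poison)
```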
|
closed
|
2022-07-18T06:49:44Z
|
2022-10-26T13:51:52Z
|
https://github.com/PokeAPI/pokeapi/issues/733
|
[] |
bidery
| 2 |
mobarski/ask-my-pdf
|
streamlit
| 59 |
persistence among sessions
|
Could it be possible to recover stored vector indexes across sessions (same API key), at least within 90 days?
|
open
|
2023-08-23T20:44:33Z
|
2023-08-23T20:44:33Z
|
https://github.com/mobarski/ask-my-pdf/issues/59
|
[] |
carloscascos
| 0 |
home-assistant/core
|
asyncio
| 140,450 |
Higher CPU load after upgrading to 2025.3.2
|
### The problem
Hi
I see a higher CPU load after upgrading from 2025.2.5 to 2025.3.2 today at 8:15. Normally only around 1-2%, but now it is consistently between 3-4%.
I have tried stopping all add-ons, but no difference in CPU load.

### What version of Home Assistant Core has the issue?
2025.3.2
### What was the last working version of Home Assistant Core?
2025.2.5
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Nothing new
```
### Additional information
_No response_
|
closed
|
2025-03-12T13:19:28Z
|
2025-03-12T20:39:34Z
|
https://github.com/home-assistant/core/issues/140450
|
[
"needs-more-information"
] |
KennethGunther
| 4 |
streamlit/streamlit
|
data-visualization
| 10,041 |
Implement browser session API
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Browser sessions allow developers to track browser status in streamlit, so that they can implement features like authentication, persistent draft or shopping cart, which require the ability to keep user state after refreshing or reopen browsers.
### Why?
The current Streamlit session will lose state if users refresh or reopen their browser. And the effort of providing an API to write cookies has been pending for years. I think providing a dedicated API to track browser sessions would be cleaner and easier to implement.
With this API, developers don't need to know how it works; it can be based on cookies, local storage, or anything else. And developers can use it with the singleton pattern to keep per-browser state and persist whatever they want in Streamlit.
### How?
This feature will introduce several new APIs:
* `st.get_browser_session(gdpr_consent=False)`, which will set a unique session id in browser if it doesn't exist, and return it.
If `gdpr_consent` is set to True, a window will pop up to ask for user's consent before setting the session id.
* `st.clean_browser_session()`, which will remove the session id from browser.
The below is a POC of how `get_browser_session` can be used to implement a simple authentication solution:
```python
from streamlit.web.server.websocket_headers import _get_websocket_headers
from streamlit.components.v1 import html
import streamlit as st
from http.cookies import SimpleCookie
from uuid import uuid4
from time import sleep
def get_cookie():
try:
headers = st.context.headers
except AttributeError:
headers = _get_websocket_headers()
if headers is not None:
cookie_str = headers.get("Cookie")
if cookie_str:
return SimpleCookie(cookie_str)
def get_cookie_value(key):
cookie = get_cookie()
if cookie is not None:
cookie_value = cookie.get(key)
if cookie_value is not None:
return cookie_value.value
return None
def get_browser_session():
"""
use cookie to track browser session
this id is unique to each browser session
it won't change even if the page is refreshed or reopened
"""
if 'st_session_id' not in st.session_state:
session_id = get_cookie_value('ST_SESSION_ID')
if session_id is None:
session_id = uuid4().hex
st.session_state['st_session_id'] = session_id
html(f'<script>document.cookie = "ST_SESSION_ID={session_id}";</script>')
sleep(0.1) # FIXME: work around bug: Tried to use SessionInfo before it was initialized
st.rerun() # FIXME: rerun immediately so that html won't be shown in the final page
st.session_state['st_session_id'] = session_id
return st.session_state['st_session_id']
@st.cache_resource
def get_auth_state():
"""
A singleton to store authentication state
"""
return {}
st.set_page_config(page_title='Browser Session Demo')
session_id = get_browser_session()
auth_state = get_auth_state()
if session_id not in auth_state:
auth_state[session_id] = False
st.write(f'Your browser session ID: {session_id}')
if not auth_state[session_id]:
st.title('Input Password')
token = st.text_input('Token', type='password')
if st.button('Submit'):
if token == 'passw0rd!':
auth_state[session_id] = True
st.rerun()
else:
st.error('Invalid token')
else:
st.success('Authentication success')
if st.button('Logout'):
auth_state[session_id] = False
st.rerun()
st.write('You are free to refresh or reopen this page without re-authentication')
```
A more complicated example of using this method to work with oauth2 can be tried here: https://ai4ec.ikkem.com/apps/op-elyte-emulator/
### Additional Context
Related issues:
* https://github.com/streamlit/streamlit/issues/861
* https://github.com/streamlit/streamlit/issues/8518
|
open
|
2024-12-18T02:12:59Z
|
2025-01-06T15:40:04Z
|
https://github.com/streamlit/streamlit/issues/10041
|
[
"type:enhancement"
] |
link89
| 2 |
microsoft/qlib
|
machine-learning
| 1,750 |
The official RL example got error
|
I copied and ran this code from the official RL example, but got the errors below. Please help check, thanks.
https://github.com/microsoft/qlib/blob/main/examples/rl/simple_example.ipynb
Training started
/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/env/venvs.py:66: UserWarning: You provided an environment generator that returned an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Tianshou is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py:98: RuntimeWarning: After 1 cleanup, the queue is still not empty.
warnings.warn(f"After {repeat} cleanup, the queue is still not empty.", category=RuntimeWarning)
Traceback (most recent call last):
File "/Users/user/Desktop/ruc/paper/quant/Quant/rl/rl_example.py", line 166, in <module>
train(
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/api.py", line 63, in train
trainer.fit(vessel)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/trainer.py", line 224, in fit
self.vessel.train(vector_env)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/vessel.py", line 171, in train
collector = Collector(self.policy, vector_env, VectorReplayBuffer(self.buffer_size, len(vector_env)))
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 80, in __init__
self.reset(False)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 131, in reset
self.reset_env(gym_reset_kwargs)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 147, in reset_env
obs, info = self.env.reset(**gym_reset_kwargs)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/finite_env.py", line 233, in reset
for i, o in zip(request_id, super().reset(request_id)):
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/env/venvs.py", line 280, in reset
assert (
AssertionError: The environment does not adhere to the Gymnasium's API.
Exception ignored in: <function DataQueue.__del__ at 0x12b9f6280>
Traceback (most recent call last):
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py", line 148, in __del__
self.cleanup()
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py", line 101, in cleanup
self._queue.get(block=False)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 111, in get
res = self._recv_bytes()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError:
env
python: 3.8
qlib: 0.9.3
|
closed
|
2024-02-21T15:55:56Z
|
2024-03-28T15:31:41Z
|
https://github.com/microsoft/qlib/issues/1750
|
[
"bug"
] |
ghyzx
| 3 |
ijl/orjson
|
numpy
| 42 |
mypy can't find OPT_SERIALIZE_DATACLASS
|
For some reason mypy doesn't think that OPT_SERIALIZE_DATACLASS exists in the orjson module. I really don't know why this is, it's clearly defined and in the .pyi file, so maybe it's an issue with mypy? Figured I would post it here in case you know why.
My code:
```python
from dataclasses import dataclass
import orjson
@dataclass
class Test:
value: str
test = Test("hi")
print(orjson.dumps(test, option=orjson.OPT_SERIALIZE_DATACLASS))
```
mypy's output is: `error: Module has no attribute "OPT_SERIALIZE_DATACLASS"`
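As a temporary workaround (not a fix), the check can be suppressed on the offending line while the root cause is investigated; just a sketch:
```python
# Workaround sketch: silence mypy on this line only.
import orjson

option = orjson.OPT_SERIALIZE_DATACLASS  # type: ignore[attr-defined]
```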
|
closed
|
2019-12-04T20:42:14Z
|
2019-12-06T21:21:23Z
|
https://github.com/ijl/orjson/issues/42
|
[] |
dbanty
| 4 |
aio-libs/aiopg
|
sqlalchemy
| 71 |
Migrate from "yield from" to await (TypeError: object Engine can't be used in 'await' expression)
|
Hi, I replaced "yield from" with "await" in my code, and received this traceback:
"TypeError: object Engine can't be used in 'await' expression"
``` python
async def db_psql_middleware(app, handler):
async def middleware(request):
db = app.get('db_psql')
if not db:
app['db_psql'] = db = await create_engine(app['psql_dsn'], minsize=1, maxsize=5)
request.app['db_psql'] = db
return (await handler(request))
return middleware
async def psql_select(request):
with (await request.app['db_psql']) as conn:
result = await conn.execute(models.select())
```
Traceback
``` python
[2015-09-17 14:50:29 +0300] [26045] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/server.py", line 272, in start
yield from self.handle_request(message, payload)
File "/Users/vvv/src/backend-tools/python/asyncio/venv35/lib/python3.5/site-packages/aiohttp/web.py", line 85, in handle_request
resp = yield from handler(request)
File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 39, in middleware
return (await handler(request))
File "/Users/vvv/src/backend-tools/python/asyncio/app.py", line 46, in psql_select
with (await request.app['db_psql']) as conn:
TypeError: object Engine can't be used in 'await' expression
```
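In case it helps anyone hitting the same traceback: my understanding from the aiopg docs is that the engine itself is not awaitable, only connection acquisition is, so the `await`-era version of the handler would look roughly like this (a sketch, not tested against the code above):
```python
async def psql_select(request):
    # engine.acquire() is an async context manager that checks a connection
    # out of the pool; awaiting the engine object itself raises TypeError.
    async with request.app['db_psql'].acquire() as conn:
        result = await conn.execute(models.select())
        return result
```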
|
closed
|
2015-09-17T12:19:19Z
|
2018-05-17T20:20:57Z
|
https://github.com/aio-libs/aiopg/issues/71
|
[] |
vvv-v13
| 18 |
robinhood/faust
|
asyncio
| 376 |
Proposal: Support for Failure-topic forwarding
|
Failure handling is an area where there is limited consensus within the Kafka community. One option for Faust would be adding support for failure forwarding in the same pattern as sinks. The API might look like:
```python
# in faust.models.record
# new Faust Record type specifically for error handling
class FailureEventRecord(Record):
errormsg: str
exception: AgentException
failed_record: Record
# in app.py
topic = app.topic('my.event', value_type=MyRecord)
failure_topic = app.topic('my.event.failed', value_type=faust.FailureEventRecord)
@app.agent(topic, sinks=[sink1, sink2], failures=[failed_record_operation])
async def my_exception_agent(records):
    async for record in records:
        raising_operation(record)
```
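For clarity, `failed_record_operation` in the snippet above is a hypothetical callable of my own; under this proposal it might look something like the sketch below (none of this is an existing Faust API):
```python
# Hypothetical failure handler: receives the wrapped FailureEventRecord.
async def failed_record_operation(failure: FailureEventRecord) -> None:
    # Could forward to a dead-letter topic, page ops, or just log and move on.
    print(f"agent failed on {failure.failed_record!r}: {failure.errormsg}")
```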
## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
Documentations says:
> "Crashing the instance to require human intervention is certainly a choice, but far from ideal considering how common mistakes in code or unexpected exceptions are. It may be better to log the error and have ops replay and reprocess the stream on notification."
## Expected behavior
Any events which fail stream processing in an agent will be wrapped in a `FailureEventRecord` and delivered to the topic or topics specified in the `failures` parameter to `app.agent()`.
# Versions
* Python 3.6
* Faust 1.7.0
* MacOS 10.14.5
* Kafka ?
* RocksDB version (if applicable)
|
open
|
2019-07-01T19:42:04Z
|
2022-03-10T20:51:58Z
|
https://github.com/robinhood/faust/issues/376
|
[] |
sivy
| 6 |
tensorpack/tensorpack
|
tensorflow
| 1,357 |
faster-rcnn performance
|
hi, wuyuxin
Thank you for sharing this wonderful project! I have some questions about Faster R-CNN performance.
On the COCO dataset, the reported results show GN has better performance:
R50-FPN | 38.9;35.4
R50-FPN-GN | 40.4;36.3
However, on my dataset (OCR, ICDAR 2017), GN performs worse than FreezeBN. Training from scratch also performs worse than using a pre-trained model, which is contrary to your conclusion. Here are the results on my dataset:
pre-train, resnet101, FreezeBN, freeze_AT=2, f-score:0.7742
pre-train, resnet101, GN, freeze_AT=2, f-score:0.7605
no pre-train(From Scratch), resnet101, GN, freeze_AT=0, f-score:0.7106
I hope for your suggestions.
|
closed
|
2019-10-30T13:21:31Z
|
2019-10-30T14:29:20Z
|
https://github.com/tensorpack/tensorpack/issues/1357
|
[] |
gulixin0922
| 1 |
pydantic/logfire
|
fastapi
| 224 |
Docs breadcrumbs usually start with 'Intro'
|
e.g. 'Intro > Legal'. This doesn't make sense.

|
closed
|
2024-05-30T12:49:03Z
|
2024-06-04T13:17:37Z
|
https://github.com/pydantic/logfire/issues/224
|
[
"documentation"
] |
alexmojaki
| 5 |
amidaware/tacticalrmm
|
django
| 1,354 |
Linux agent disconnects after a few seconds
|
**Server Info :**
- OS: Ubuntu 20.04
- Browser: Chrome
- RMM Version (as shown in top left of web UI): 0.15.3
**Installation Method:**
- [x] Standard
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): 2.4.2
- Agent OS: Ubuntu 20.04
**Describe the bug**
When I install the Linux agent app on my Linux VPS, it connects to the RMM panel and shows up there, but after a few seconds the VPS disconnects and I can't do anything with RMM on that VPS.
How can I get some logs to debug the problem? Or what should I do to fix the issue?
|
closed
|
2022-11-16T06:27:46Z
|
2022-11-16T06:33:35Z
|
https://github.com/amidaware/tacticalrmm/issues/1354
|
[] |
alez404
| 1 |
biolab/orange3
|
numpy
| 6,111 |
Prediction widget shows no probabilities
|
**Describe the bug**
The Prediction widget shows no probabilities for the predicted classes.
**To Reproduce**
**Expected behavior**
See probabilities for predicted classes in prediction Widget.
**Orange version:**
3.32
**Screenshots**

**Operating system:**
Windows 11
**Additional context**
|
closed
|
2022-08-29T08:51:48Z
|
2022-08-29T09:40:25Z
|
https://github.com/biolab/orange3/issues/6111
|
[] |
alexanderfussan
| 1 |
wagtail/wagtail
|
django
| 12,814 |
Wagtail choosers ignore Django's ForeignKey.limit_choices_to
|
### Issue Summary
If you set up a callable for a `ForeignKey.limit_choices_to`, It will get called by Django during form construction, but Wagtail's chooser system will never call that function, and thus not limit the choices presented in the chooser. This presumably also affects the other forms of `limit_choices_to` (see [here](https://docs.djangoproject.com/en/5.1/ref/models/fields/#django.db.models.ForeignKey.limit_choices_to)), but a callable is the easiest one to use as a robust example.
### Steps to Reproduce
1. Build a basic bakerydemo, then apply the changes from here: https://github.com/wagtail/bakerydemo/compare/main...coredumperror:bakerydemo:limit_choices_to
2. Open a BreadPage editor (which will cause `"limit_choices_to_bread"` to be printed to the console).
3. Choose a BreadType (which will _not_ cause that print statement to fire).
4. Every bread type is displayed in the chooser, instead of just ones with "bread" in their title.
### Technical details
Python version: 3.12
Django version: 5.1
Wagtail version: 6.4.0
### Working on this
Shouldn't this Django feature be supported by Wagtail's chooser system? It's quite surprising that my project's existing code, which worked fine when I wrote it years ago, has now stopped limiting the available options because Wagtail's new(ish) chooser system apparently just ignores it.
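For reference, the shape of the `limit_choices_to` callable in the linked bakerydemo diff is roughly the following (a simplified sketch, not the exact change):
```python
# Simplified sketch of a ForeignKey whose choices are limited by a callable.
from django.db import models


def limit_choices_to_bread():
    print("limit_choices_to_bread")       # called by Django during form construction
    return {"title__icontains": "bread"}  # never consulted by Wagtail's chooser


class BreadPage(models.Model):
    bread_type = models.ForeignKey(
        "breads.BreadType",
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        limit_choices_to=limit_choices_to_bread,
    )
```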
|
closed
|
2025-01-24T20:59:38Z
|
2025-01-24T22:55:13Z
|
https://github.com/wagtail/wagtail/issues/12814
|
[
"type:Bug",
"status:Unconfirmed"
] |
coredumperror
| 2 |
exaloop/codon
|
numpy
| 251 |
add benchmark vs mypyc ?
|
open
|
2023-03-16T07:01:36Z
|
2023-03-17T13:58:37Z
|
https://github.com/exaloop/codon/issues/251
|
[] |
JunyiXie
| 1 |
|
lucidrains/vit-pytorch
|
computer-vision
| 206 |
Question about code of `vit_for_small_dataset.py`
|
Hi,
Thanks for the outstanding efforts to bring ViT to the framework of PyTorch, which is meaningful for me to learn about it!
However, when I review the code in `vit_for_small_dataset.py`, specifically in `SPT`, I find the code at line `86` confusing.
```python
patch_dim = patch_size * patch_size * 5 * channels
```
I fail to understand the meaning of `5` here. Is it a typo? Based on the context here, it seems that `patch_size * patch_size * channels` is more appropriate.
Thanks for your attention!
|
closed
|
2022-03-13T04:07:25Z
|
2022-03-13T23:32:41Z
|
https://github.com/lucidrains/vit-pytorch/issues/206
|
[] |
RaymondJiangkw
| 2 |
keras-team/keras
|
tensorflow
| 20,719 |
Bug in functional model
|
I think there is a bug in the functional model.
Case 1: When inputs=outputs at model construction, but training with a different output shape: training succeeds
```
import keras
from keras import layers
import numpy as np
input_1 = layers.Input(shape=(3,))
input_2 = layers.Input(shape=(5,))
model_1 = keras.models.Model([input_1, input_2], [input_1, input_2])
print(model_1.summary())
model_1.compile(optimizer='adam',metrics=['accuracy','accuracy'],loss=['mse'])
#Notice I am passing different output size for training but still training happens
model_1.fit([np.random.normal(size=(10,3)),np.random.normal(size=(10,5))],
[np.random.normal(size=(10,1)),np.random.normal(size=(10,2))])
print('Training completed')
```
Case 2: Same as Case 1, but different behavior with different mismatched output shapes (than Case 1) for training: an error is raised during loss calculation. However, I expect an error during graph execution itself.
```
#With different output shapes than the model was constructed with, it raises an error while calculating the loss.
#Instead it should have raised a shape mismatch error during graph execution.
model_1.fit([np.random.normal(size=(10,3)),np.random.normal(size=(10,5))],
[np.random.normal(size=(10,2)),np.random.normal(size=(10,4))])
```
Case 3: With Unconnected inputs and outputs
```
input_1 = layers.Input(shape=(3,))
input_2 = layers.Input(shape=(5,))
input_3 = layers.Input(shape=(1,))
input_4 = layers.Input(shape=(2,))
model_2 = keras.models.Model([input_1, input_2], [input_3, input_4])
model_2.compile(optimizer='adam',metrics=['accuracy','accuracy'],loss=['mse'])
#Passing correct inputs and outputs fails because these are not connected.
model_2.fit([np.random.normal(size=(10,3)),np.random.normal(size=(10,5))], [np.random.normal(size=(10,1)),np.random.normal(size=(10,2))])
```
I got the error below, which is correct but not useful for end users. Instead, it should have raised an error during graph construction.
```
177 output_tensors = []
178 for x in self.outputs:
--> 179 output_tensors.append(tensor_dict[id(x)])
180
181 return tree.pack_sequence_as(self._outputs_struct, output_tensors)
KeyError: "Exception encountered when calling Functional.call().\n\n\x1b[1m139941182292272\x1b[0m\n\nArguments received by Functional.call():\n • inputs=('tf.Tensor(shape=(None, 3), dtype=float32)', 'tf.Tensor(shape=(None, 5), dtype=float32)')\n • training=True\n • mask=('None', 'None')"
```
I tried to fix an issue similar to Case 3 by raising an error during graph build itself in PR #20705, where I noticed this issue related to Case 1 (from a failed test case). Please refer to the [gist](https://colab.research.google.com/gist/Surya2k1/88040ebc171b2154627fe54a3560d4b1/functional_model_bug.ipynb).
|
open
|
2025-01-03T16:29:52Z
|
2025-02-05T06:58:14Z
|
https://github.com/keras-team/keras/issues/20719
|
[
"type:Bug"
] |
Surya2k1
| 3 |
PaddlePaddle/ERNIE
|
nlp
| 229 |
ERNIE English pre-trained model
|
Does ERNIE have a publicly available English pre-trained model?
|
closed
|
2019-07-29T09:35:49Z
|
2019-08-19T03:10:26Z
|
https://github.com/PaddlePaddle/ERNIE/issues/229
|
[] |
1234560o
| 1 |
Tinche/aiofiles
|
asyncio
| 90 |
Adding file.peek()
|
Hello!
I noticed that in your README.md there is no mention of your aiofiles supporting [peek](https://docs.python.org/3/library/io.html#io.BufferedReader.peek). However in your code there are some tests for it.
When attempting to use peek in my code, it did not seem to work as desired. Could you please add this functionality?
Many Thanks
|
open
|
2020-12-07T03:49:28Z
|
2020-12-07T03:49:28Z
|
https://github.com/Tinche/aiofiles/issues/90
|
[] |
MajesticMullet
| 0 |
huggingface/text-generation-inference
|
nlp
| 2,332 |
TGI on NVIDIA GH200 (Arm64)
|
### Feature request
Would it be possible to build/publish an arm64 container image for the text-generation-inference? I would like to be able to run it on a NVIDIA GH200 which is an arm64-based system.
Thanks,
Jonathan
### Motivation
Current images won't run on arm64.
### Your contribution
I've tried to build the image myself, but I haven't been able to get it to build successfully.
|
open
|
2024-07-30T11:50:24Z
|
2024-09-03T12:09:32Z
|
https://github.com/huggingface/text-generation-inference/issues/2332
|
[] |
dartcrossett
| 3 |
huggingface/datasets
|
computer-vision
| 6,773 |
Dataset on Hub re-downloads every time?
|
### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
**EDIT:** as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing the `load_dataset()` call retrieves from the cache, then the `map()` calls should also retrieve from the cached output. But the `map()` commands re-execute sometimes.
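For reference, the relevant pattern is roughly the following (simplified from the linked function; the column name is illustrative). My understanding is that the `map()` output should be reused as long as the mapped function fingerprints identically and `load_from_cache_file` stays at its default:
```python
# Simplified sketch of the load + map pattern in load_borderlines_hf.
from datasets import load_dataset

ds = load_dataset("manestay/borderlines", "territories", split="train")

# On re-runs this map should be served from the cache, provided the lambda
# hashes to the same fingerprint each time (closures can break dill hashing).
ds = ds.map(
    lambda row: {"claimants": row["claimants"].split(";")},  # column name illustrative
    load_from_cache_file=True,
)
```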
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
|
closed
|
2024-04-02T17:23:22Z
|
2024-04-08T18:43:45Z
|
https://github.com/huggingface/datasets/issues/6773
|
[] |
manestay
| 5 |
aws/aws-sdk-pandas
|
pandas
| 2,467 |
OSError: When resolving region for bucket 'ursa-labs-taxi-data': AWS Error [code 99]: curlCode: 35, SSL connect error
|
### Describe the bug
When trying to read data from the example bucket (and any other bucket) the error "OSError: When resolving region for bucket 'ursa-labs-taxi-data': AWS Error [code 99]: curlCode: 35, SSL connect error" is displayed although aws environment variables are set and configured as well as the proxy
### How to Reproduce
```
import awswranger as wr
df = wr.s3.read_parquet(path="s3://ursa-labs-taxi-data/2017/")
```
### Expected behavior
Read data from s3 into the dataframe
### Your project
_No response_
### Screenshots
_No response_
### OS
Red Hat Enterprise Linux release 8.8 (Ootpa)
### Python version
3.8.16
### AWS SDK for pandas version
3.4.0
### Additional context
Through the AWS CLI, the services are reachable.
|
closed
|
2023-09-21T07:39:42Z
|
2023-10-13T10:16:26Z
|
https://github.com/aws/aws-sdk-pandas/issues/2467
|
[
"bug"
] |
dmarkowvw
| 2 |
opengeos/leafmap
|
streamlit
| 639 |
pmtiles tooltip + layers control is broken with folium > 0.14.0
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.29.5
- Python version: 3.9
- Operating System: Windows/Linux
### Description
When PMTilesLayer tooltip is enabled, it leads to console error and prevents layers control from loading. Layers control is able to load when tooltip=False
This can be seen on the live site as well: https://leafmap.org/notebooks/82_pmtiles/
### What I Did
Followed the example in 82_pmtiles.ipynb and found out that it works properly on one notebook env but not the other. Realized that layers control is missing when I tried to add other layers to the map.
Error in browser console suggests that it may be related to folium/jinja templates:
```
// below line is not present when using folium 0.14.0
macro_element_51d29068fa44734e98b3aa01b7f1db45.addTo(pm_tiles_vector_ff278fae2ca890eb72792f1dc892823e);
```
So the current workaround is to downgrade to folium==0.14.0
folium-pmtiles is affected as well (using this example: https://github.com/jtmiclat/folium-pmtiles/blob/master/example/pmtiles_vector_maplibre.ipynb)
Confirmed to be related to this change in folium: https://github.com/python-visualization/folium/pull/1690#issuecomment-1377180410
|
closed
|
2023-12-15T06:29:30Z
|
2023-12-15T20:51:24Z
|
https://github.com/opengeos/leafmap/issues/639
|
[
"bug"
] |
prusswan
| 1 |
mirumee/ariadne
|
graphql
| 1,092 |
Replace ApolloTracing with simple tracing extension
|
Instead of `ApolloTracing` (which is deprecated) we could implement a simple extension that adds an `extensions: {traces: []}` object, where each trace would be a separate `{path: ..., time: ...}` entry.
I don't know how useful this would be to folk, but maybe it would have some utility as simple example extension?
|
open
|
2023-06-02T17:37:50Z
|
2023-06-05T14:16:26Z
|
https://github.com/mirumee/ariadne/issues/1092
|
[
"enhancement",
"decision needed"
] |
rafalp
| 3 |
2noise/ChatTTS
|
python
| 800 |
bug: on both main and dev, running main.py raises lzma.LZMAError: Corrupt input data
|
I am using the latest code on both main and dev. When I use the API in api/main.py, the web result is an error: there is no voice, and the server reports lzma.LZMAError: Corrupt input data.
1. I tried checking the model, the code, and the pip packages; it runs, but the generated audio has no voice and is just noise.
[audio_files (19).zip](https://github.com/user-attachments/files/17505105/audio_files.19.zip)
2. I set use_vllm and it works, but the generated audio has noise and another voice, like this:
[audio_files (20).zip](https://github.com/user-attachments/files/17505111/audio_files.20.zip)
|
open
|
2024-10-24T10:00:31Z
|
2025-02-18T02:45:39Z
|
https://github.com/2noise/ChatTTS/issues/800
|
[
"documentation",
"help wanted"
] |
xhjcxxl
| 5 |
comfyanonymous/ComfyUI
|
pytorch
| 7,126 |
CUDA
|
### Your question
CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
### Logs
```powershell
```
### Other
_No response_
|
open
|
2025-03-08T04:21:39Z
|
2025-03-22T09:27:51Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7126
|
[
"User Support"
] |
cas12e3
| 6 |
xinntao/Real-ESRGAN
|
pytorch
| 192 |
What hardware should I upgrade on Windows to speed up restoration?
|
Is it the CPU, the RAM, or the GPU? And which brand works best: AMD, Intel, or Nvidia?
|
open
|
2021-12-20T10:08:09Z
|
2022-02-21T11:34:02Z
|
https://github.com/xinntao/Real-ESRGAN/issues/192
|
[] |
rapmq0
| 3 |
microsoft/qlib
|
deep-learning
| 1,681 |
get
|
## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## Screenshot
<!-- A screenshot of the error message or anything shouldn't appear-->
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version:
- Python version:
- OS (`Windows`, `Linux`, `MacOS`):
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
|
closed
|
2023-10-25T03:15:21Z
|
2023-10-25T03:19:02Z
|
https://github.com/microsoft/qlib/issues/1681
|
[
"bug"
] |
ElonJustin7
| 0 |
skforecast/skforecast
|
scikit-learn
| 281 |
Questions about using the known exogenous variables to conduct forecasted values
|
Hi developers,
I have a little confusion about **using the known exogenous variables** to conduct forecasted values.
First, I use the **multi-series functionality to "predict"** all the exogenous variables and the target output Y simultaneously. After that, I would like to use the 'predicted values' of the exogenous variables to perform direct/recursive forecasting of the target output Y, and I have referred to the related documents, such as the [weather exogenous variables](https://www.cienciadedatos.net/documentos/py39-forecasting-time-series-with-skforecast-xgboost-lightgbm-catboost.html) example.
However, I am confused about where to supply the known future values, because they are future values whose format is "not consistent" with the training data. How can I "combine" them when using the skforecast framework?
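To make the question concrete, this is the pattern I have in mind, based on my reading of the skforecast docs (a sketch with toy data; the exog columns stand in for the values predicted by the multi-series model):
```python
# Sketch: fit with historical exog, then predict with future (predicted) exog.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg

y_train = pd.Series(np.random.randn(100).cumsum(), name="y")
exog_train = pd.DataFrame({"exog_1": np.random.randn(100)})

forecaster = ForecasterAutoreg(regressor=GradientBoostingRegressor(), lags=12)
forecaster.fit(y=y_train, exog=exog_train)

# The future exog must cover the forecast horizon and keep the same columns;
# in my case it would hold the predicted exogenous values, not observations.
exog_future = pd.DataFrame({"exog_1": np.random.randn(36)})
predictions = forecaster.predict(steps=36, exog=exog_future)
```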
|
closed
|
2022-11-10T02:10:39Z
|
2022-11-10T14:23:50Z
|
https://github.com/skforecast/skforecast/issues/281
|
[
"question"
] |
kennis222
| 2 |
postmanlabs/httpbin
|
api
| 204 |
a way to make stream-bytes produce multiple http chunks
|
Currently (unless I'm missing something) the chunk_size parameter on stream-bytes controls the block size used to write the data to the socket. But the entire data is still sent as one HTTP chunk i.e. there is only one total size followed by the data.
Would it be reasonable to have chunk_size control the size of the HTTP chunks? Or if not, maybe another parameter to control that?
I don't know Python so I'm not sure how feasible that is.
Thanks for a great tool.
|
closed
|
2015-01-20T21:26:33Z
|
2018-04-26T17:51:05Z
|
https://github.com/postmanlabs/httpbin/issues/204
|
[] |
apmckinlay
| 4 |
keras-team/keras
|
machine-learning
| 20,139 |
Keras should support bitwise ops
|
Numpy has bitwise ops https://numpy.org/doc/stable/reference/routines.bitwise.html
TensorFlow has bitwise ops https://www.tensorflow.org/api_docs/python/tf/bitwise
Jax has bitwise ops https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.bitwise_and.html
PyTorch has bitwise ops https://pytorch.org/docs/stable/generated/torch.bitwise_and.html
So it seems natural for Keras to support bitwise ops as well.
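To make the request concrete, a hypothetical backend-agnostic surface might look like the sketch below (names are illustrative, mirroring the existing `keras.ops` namespace; nothing here is claimed to be an existing Keras API):
```python
# Hypothetical usage sketch of bitwise ops under keras.ops (names illustrative).
import numpy as np
from keras import ops

a = np.array([0b1100, 0b1010], dtype="int32")
b = np.array([0b1010, 0b0110], dtype="int32")

ops.bitwise_and(a, b)    # -> [0b1000, 0b0010]
ops.bitwise_or(a, b)     # -> [0b1110, 0b1110]
ops.bitwise_xor(a, b)    # -> [0b0110, 0b1100]
ops.left_shift(a, 1)     # -> [0b11000, 0b10100]
```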
|
closed
|
2024-08-20T21:27:41Z
|
2024-08-22T16:27:10Z
|
https://github.com/keras-team/keras/issues/20139
|
[
"type:feature"
] |
kaiyuanw
| 2 |
littlecodersh/ItChat
|
api
| 37 |
Login error: Remote end closed connection without response
|
TypeError: getresponse() got an unexpected keyword argument 'buffering'
http.client.RemoteDisconnected: Remote end closed connection without response
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
|
closed
|
2016-07-17T01:47:14Z
|
2016-07-19T10:55:17Z
|
https://github.com/littlecodersh/ItChat/issues/37
|
[
"invalid"
] |
MrHelix0625
| 3 |
alpacahq/alpaca-trade-api-python
|
rest-api
| 579 |
[Bug/Feature]: long-short.py - Qty should be calculated relative to price of stock
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When opening new positions, long-short.py opens every position with approximately the same quantity of shares (i.e., with 100k capital, each position is 160 shares). This does not make sense; the position size should be calculated based on the price of the stock.
For example, the code tries to open a long position in AMZN for 160 shares ($2,880 each), which goes way above the available liquidity. Likewise, it opens a position of 160 shares in GM ($44.85 each).
I've tried to fix the code but I'm not sure about all the references, so it'd be cool if someone from Alpaca could fix this.
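To illustrate what I mean, the sizing step could divide each position's dollar allocation by the latest price instead of using a flat share count (sketch only; `equity`, `prices`, and `symbols` here are placeholders, not variables from long-short.py):
```python
# Sketch: size each position by dollars, not by a fixed share count.
equity = 100_000.0
symbols = ["AMZN", "GM"]
prices = {"AMZN": 2880.0, "GM": 44.85}

dollars_per_position = equity / len(symbols)
qty = {sym: int(dollars_per_position // prices[sym]) for sym in symbols}
print(qty)  # e.g. {'AMZN': 17, 'GM': 1114} instead of 160 shares of everything
```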
### Expected Behavior
_No response_
### Steps To Reproduce
_No response_
### Anything else?
_No response_
|
open
|
2022-02-24T15:43:53Z
|
2022-04-29T15:18:30Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/579
|
[
"enhancement"
] |
Dylan-86
| 1 |
davidteather/TikTok-Api
|
api
| 564 |
by_sound method failing
|
**Describe the bug**
by_sound method returns an error
**The buggy code**
```
TikTokApi.get_instance(proxy=sys.argv[3]).by_sound(sys.argv[1], int(sys.argv[2]))
```
**Expected behavior**
The by_sound method should return the posts for the given sound instead of raising an error.
**Error Trace (if any)**
```
raise EmptyResponseError(\nTikTokApi.exceptions.EmptyResponseError: Empty response from Tiktok to https://m.tiktok.com/api/music/item_list/?aid=1988&app_name=tiktok_web&device_platform=web&referer=&root_referer=&user_agent=Mozilla%252F5.0%2B%28iPhone%253B%2BCPU%2BiPhone%2BOS%2B12_2%2Blike%2BMac%2BOS%2BX%29%2BAppleWebKit%252F605.1.15%2B%28KHTML%2C%2Blike%2BGecko%29%2BVersion%252F13.0%2BMobile%252F15E148%2BSafari%252F604.1&cookie_enabled=true&screen_width=1858&screen_height=1430&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&ac=4g&timezone_name=&appId=1233&appType=m&isAndroid=False&isMobile=False&isIOS=False&OS=windows&secUid=&musicID=6763054442704145158&count=35&cursor=0&shareUid=&language=en&verifyFp=verify_khr3jabg_V7ucdslq_Vrw9_4KPb_AJ1b_Ks706M8zIJTq&did=4792150057109208114&_signature=_02B4Z6wo00f01X4-6mwAAIBDLD1T3YFqwoV-P-7AAD.80b\n
```
**Desktop (please complete the following information):**
- OS: MacOS
- TikTokApi Version 3.9.5
|
closed
|
2021-04-15T09:20:08Z
|
2021-04-15T22:45:44Z
|
https://github.com/davidteather/TikTok-Api/issues/564
|
[
"bug"
] |
nikolamajmunovic
| 2 |
home-assistant/core
|
asyncio
| 140,652 |
[Overkiz] - Unsupported value CyclicSwingingGateOpener
|
### The problem
I have in my Tahoma an association with Netatmo station and legrand drivia switch (https://www.legrand.fr/pro/catalogue/pack-de-demarrage-drivia-with-netatmo-pour-installation-connectee-1-module-control-1-contacteur-connecte).
In the system log, I see an error related my Netatmo on Overkiz.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Overkiz
### Link to integration documentation on our website
_No response_
### Diagnostics information
Logger: pyoverkiz.enums.ui
Source: components/overkiz/__init__.py:87
First occurred: March 14, 2025 at 19:53:28 (3 occurrences)
Last logged: March 14, 2025 at 19:53:28
Unsupported value CyclicSwingingGateOpener has been returned for <enum 'UIWidget'>
Unsupported value NetatmoGateway has been returned for <enum 'UIWidget'>
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_
|
closed
|
2025-03-15T07:57:42Z
|
2025-03-22T18:40:49Z
|
https://github.com/home-assistant/core/issues/140652
|
[
"integration: overkiz"
] |
alsmaison
| 3 |
FlareSolverr/FlareSolverr
|
api
| 1,314 |
Error solving the challenge.
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:3.3.21
- Last working FlareSolverr version:none
- Operating system:win10
- Are you using Docker: [yes/no]no
- FlareSolverr User-Agent (see log traces or / endpoint)Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no]no
- Are you using a Proxy: [yes/no]no
- Are you using Captcha Solver: [yes/no]no
- If using captcha solver, which one:no
- URL to test this issue:https://unitedshop.ws/
```
### Description
check pics for more info
### Logged Error Messages
```text
500 internal server error
(Failed to recieve the response from the HTTP-server 'localhost'.)in silverbullet
```
### Screenshots


|
closed
|
2024-08-05T16:43:03Z
|
2024-08-05T22:16:56Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1314
|
[
"duplicate"
] |
Ladjal1997
| 0 |
PeterL1n/RobustVideoMatting
|
computer-vision
| 268 |
Encode the video output in HEVC instead of H.264?
|
Hello there, how can I make RVM encode the video output in HEVC instead of H.264? And is it possible to keep the audio from the original video too? Thank you in advance!
|
open
|
2024-07-20T10:26:53Z
|
2024-07-20T10:26:53Z
|
https://github.com/PeterL1n/RobustVideoMatting/issues/268
|
[] |
web2299
| 0 |
ClimbsRocks/auto_ml
|
scikit-learn
| 151 |
make nlp work for individual dictionaries
|
right now it only works with dataframes.
we'll probably want to move this whole block into fit:
```
col_names = self.text_columns[key].get_feature_names()
# Make weird characters play nice, or just ignore them :)
for idx, word in enumerate(col_names):
try:
col_names[idx] = str(word)
except:
col_names[idx] = 'non_ascii_word_' + str(idx)
col_names = ['nlp_' + key + '_' + str(word) for word in col_names]
```
|
closed
|
2016-12-14T19:02:18Z
|
2017-03-12T01:17:17Z
|
https://github.com/ClimbsRocks/auto_ml/issues/151
|
[] |
ClimbsRocks
| 1 |
fastapi/sqlmodel
|
sqlalchemy
| 1,218 |
Do you support importing Asynchronous Sessions from sqlmodel?
|
### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
`from sqlmodel import AsyncSession`
|
open
|
2024-11-18T12:24:46Z
|
2025-02-18T03:54:57Z
|
https://github.com/fastapi/sqlmodel/issues/1218
|
[] |
yuanjie-ai
| 4 |
FlareSolverr/FlareSolverr
|
api
| 409 |
The cookies provided by FlareSolverr are not valid
|
**Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
|
closed
|
2022-06-17T18:35:54Z
|
2022-06-17T23:51:59Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/409
|
[
"invalid"
] |
Remulos44
| 1 |
Ehco1996/django-sspanel
|
django
| 114 |
A brand-new Shadowsocks web panel developed with Django
|
diango..
|
closed
|
2018-05-05T05:26:22Z
|
2018-05-08T14:08:14Z
|
https://github.com/Ehco1996/django-sspanel/issues/114
|
[] |
shjdssxsyydaw
| 0 |
nltk/nltk
|
nlp
| 2,961 |
nltk.lm.api entropy formula source?
|
Hi! I've been experimenting with training and testing a standard trigram language model on my own dataset. Upon investigating the `entropy` method of the LM class, I was a bit confused. The [docs](https://www.nltk.org/api/nltk.lm.api.html#nltk.lm.api.LanguageModel.entropy) only mention a very brief description: "Calculate cross-entropy of model for given evaluation text."
It seems to me that this entropy measure just **averages the ngram negative log probability** (so trigram in my case) and **makes it positive** by multiplying by -1:
```python
def _mean(items):
    """Return average (aka mean) for sequence of items."""
    return sum(items) / len(items)


def entropy(self, text_ngrams):
    """Calculate cross-entropy of model for given evaluation text.

    :param Iterable(tuple(str)) text_ngrams: A sequence of ngram tuples.
    :rtype: float
    """
    return -1 * _mean(
        [self.logscore(ngram[-1], ngram[:-1]) for ngram in text_ngrams]
    )
```
**So my general question is: Where does this formula for entropy stem from? Is there any paper referencing the method?** I'm just a bit stuck with the different versions of entropy that exist and I don't know which one is used here (and therefore I don't know how to interpret it correctly).
### Other formulas of entropy/perplexity
As a formula I know the [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) which in Python would be:
```python
def shannon_entropy(self, text_ngrams):
    return -1 * sum(
        [self.score(ngram[-1], ngram[:-1]) * self.logscore(ngram[-1], ngram[:-1]) for ngram in text_ngrams]
    )
```
And there's also the perplexity formula of [Jurafsky](https://www.youtube.com/watch?v=NCyCkgMLRiY), which returns a different score than `lm.perplexity` (which is 2**entropy):
```python
from numpy import prod

def jurafsky_perplexity(self, text_ngrams):
    problist = [self.score(ngram[-1], ngram[:-1]) for ngram in text_ngrams]
    return pow(prod(problist), -1 / len(problist))
```
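For my own understanding, writing both definitions out (my derivation, not something from the NLTK docs) suggests they agree as long as everything is in log base 2:
```latex
H(W)  = -\frac{1}{N}\sum_{i=1}^{N} \log_2 P(w_i \mid w_{i-n+1},\dots,w_{i-1})

PP(W) = \Big(\prod_{i=1}^{N} P(w_i \mid w_{i-n+1},\dots,w_{i-1})\Big)^{-1/N}
      = 2^{-\frac{1}{N}\sum_{i=1}^{N}\log_2 P(w_i \mid w_{i-n+1},\dots,w_{i-1})}
      = 2^{H(W)}
```
So `lm.perplexity` (i.e. `2**entropy`) should match the Jurafsky perplexity; this makes me suspect the formula is simply the per-word cross-entropy from Jurafsky & Martin rather than Shannon's distribution entropy.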
Thanks!
PS: My apologies if this issue lacks some information - I don't often open issues on Github 😅 Let me know if I need to update something!
UPDATE: my Python implementation of the Jurafsky perplexity was totally wrong as I had quickly written it. I've updated it to reflect the actual scores from the web lecture.
|
closed
|
2022-03-11T16:50:33Z
|
2024-01-29T09:04:40Z
|
https://github.com/nltk/nltk/issues/2961
|
[
"language-model"
] |
mbauwens
| 18 |
wger-project/wger
|
django
| 1,570 |
Database encryption
|
Add encryption at rest to the database so the server's owner can't access users' data. The account password should be the master key to decrypt the database.
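Not a concrete proposal for wger's schema, just a minimal sketch of the idea, assuming Python's `cryptography` package; every name and value below is made up:
```python
# Sketch: derive a per-user key from the account password and encrypt field values with it.
# The salt is stored next to the user row; the derived key only exists while the user is
# logged in, so whoever runs the server cannot read the data at rest.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_user_key(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))


def encrypt_field(plaintext: str, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext.encode())


def decrypt_field(token: bytes, key: bytes) -> str:
    return Fernet(key).decrypt(token).decode()


salt = os.urandom(16)
key = derive_user_key("correct horse battery staple", salt)
token = encrypt_field("bench press 75 kg x 5", key)
assert decrypt_field(token, key) == "bench press 75 kg x 5"
```
One open question with this approach is password changes and resets: without the old password (or an escrowed key), the existing data can no longer be decrypted.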
|
open
|
2024-01-28T20:41:34Z
|
2024-01-28T20:41:34Z
|
https://github.com/wger-project/wger/issues/1570
|
[] |
m3thm4th
| 0 |
CTFd/CTFd
|
flask
| 1,962 |
Docker compose stuck on waiting for MySQL
|
I cloned the repo and ran `docker-compose build`, which completed successfully. I then ran `docker-compose up`, and it is stuck on `Waiting for mysql+pymysql...`:
```
docker-compose up
Starting ctfd_db_1 ... done
Starting ctfd_cache_1 ... done
Starting ctfd_ctfd_1 ... done
Starting ctfd_nginx_1 ... done
Attaching to ctfd_cache_1, ctfd_db_1, ctfd_ctfd_1, ctfd_nginx_1
db_1 | 2021-07-24 17:57:00+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.4.12+maria~bionic started.
cache_1 | 1:C 24 Jul 17:56:59.849 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
cache_1 | 1:C 24 Jul 17:56:59.849 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
cache_1 | 1:C 24 Jul 17:56:59.849 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
db_1 | 2021-07-24 17:57:01+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
cache_1 | 1:M 24 Jul 17:56:59.903 * Running mode=standalone, port=6379.
cache_1 | 1:M 24 Jul 17:56:59.903 # Server initialized
cache_1 | 1:M 24 Jul 17:56:59.903 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
cache_1 | 1:M 24 Jul 17:56:59.903 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1 | 2021-07-24 17:57:01+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.4.12+maria~bionic started.
cache_1 | 1:M 24 Jul 17:56:59.903 * DB loaded from disk: 0.000 seconds
cache_1 | 1:M 24 Jul 17:56:59.903 * Ready to accept connections
db_1 | 2021-07-24 17:57:02 0 [Note] mysqld (mysqld 10.4.12-MariaDB-1:10.4.12+maria~bionic) starting as process 1 ...
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Using Linux native AIO
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Uses event mutexes
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Number of pools: 1
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Using SSE2 crc32 instructions
db_1 | 2021-07-24 17:57:02 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Initializing buffer pool, total size = 256M, instances = 1, chunk size = 128M
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: Completed initialization of buffer pool
db_1 | 2021-07-24 17:57:02 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: Creating shared tablespace for temporary tables
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: Waiting for purge to start
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: 10.4.12 started; log sequence number 61164; transaction id 21
db_1 | 2021-07-24 17:57:03 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db_1 | 2021-07-24 17:57:04 0 [Note] Server socket created on IP: '::'.
db_1 | 2021-07-24 17:57:04 0 [Warning] 'user' entry 'root@c6e0dcba624f' ignored in --skip-name-resolve mode.
db_1 | 2021-07-24 17:57:04 0 [Warning] 'user' entry '@c6e0dcba624f' ignored in --skip-name-resolve mode.
db_1 | 2021-07-24 17:57:04 0 [Warning] 'proxies_priv' entry '@% root@c6e0dcba624f' ignored in --skip-name-resolve mode.
db_1 | 2021-07-24 17:57:04 0 [Note] InnoDB: Buffer pool(s) load completed at 210724 17:57:04
db_1 | 2021-07-24 17:57:04 0 [Note] mysqld: ready for connections.
db_1 | Version: '10.4.12-MariaDB-1:10.4.12+maria~bionic' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
ctfd_1 | Waiting for mysql+pymysql://ctfd:ctfd@db to be ready
```
|
open
|
2021-07-24T18:00:58Z
|
2023-05-05T08:09:00Z
|
https://github.com/CTFd/CTFd/issues/1962
|
[] |
rajatagarwal457
| 11 |
coqui-ai/TTS
|
pytorch
| 2,438 |
[Feature request] Implementation for FreeVC
|
<!-- Welcome to the 🐸TTS project!
We are excited to see your interest and appreciate your support! --->
**🚀 Feature Description**
Original implementation: https://github.com/OlaWod/FreeVC
We can use it in combination with any 🐸TTS model and make it speak with any voice, i.e. faking voice cloning.
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Additional context**
I'll initially implement only inference with the pre-trained models, and later we can implement training. It should be easy as it is quite similar to VITS.
<!-- Add any other context or screenshots about the feature request here. -->
|
closed
|
2023-03-20T13:54:57Z
|
2024-02-24T05:59:56Z
|
https://github.com/coqui-ai/TTS/issues/2438
|
[
"feature request"
] |
erogol
| 4 |
slackapi/python-slack-sdk
|
asyncio
| 1,491 |
Why don't we use certifi root certificates by default?
|
Hey there,
Today my colleague ran into the certificate problem that appears from time to time for people who use your client.
You can read more about this problem here:
https://stackoverflow.com/questions/59808346/python-3-slack-client-ssl-sslcertverificationerror
As I see it, the problem is that people have issues with the root certificates on their machines.
Python has always had problems with root certificates, and to solve this people started using the certifi project.
For example, you can find this project used in the quite popular `requests` library:
https://github.com/psf/requests/blob/main/src/requests/utils.py#L63
https://github.com/psf/requests/blob/main/src/requests/adapters.py#L294
Is there any reason not to use the certifi library the way the requests library does?
I think it would simplify using your library everywhere.
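For completeness, the workaround we use for now: a minimal sketch assuming `certifi` is installed and that `WebClient` keeps accepting an `ssl` argument:
```python
# Workaround: hand the client an SSLContext backed by certifi's CA bundle
# instead of relying on the system root certificates.
import ssl

import certifi
from slack_sdk import WebClient

ssl_context = ssl.create_default_context(cafile=certifi.where())
client = WebClient(token="xoxb-***", ssl=ssl_context)
client.chat_postMessage(channel="#general", text="hello from certifi")
```
Shipping something like this as the default (or at least documenting it) would save people from this class of `SSLCertVerificationError`.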
|
open
|
2024-04-23T13:57:26Z
|
2024-04-24T18:34:52Z
|
https://github.com/slackapi/python-slack-sdk/issues/1491
|
[
"enhancement",
"question"
] |
mangin
| 7 |
pydata/xarray
|
pandas
| 9,696 |
Sliding window (mix of rolling and coarsen)
|
### Is your feature request related to a problem?
I often have functions to apply to sliding windows, such as an FFT computation, but neither coarsen nor rolling fits. For example, generating a spectrogram with overlap between windows is complicated using xarray. Of course, one could use [scipy STFT](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.ShortTimeFFT.html) for the spectrogram example, but one still needs to manage the coordinates manually.
### Describe the solution you'd like
A new window function that combines rolling and coarsen. The new window would have the current dim parameter of rolling and coarsen split into two:
- window_size: the size of the window (equivalent to the current value of the dim parameter)
- hop: how much do we shift in between each window (see the hop parameter of [scipy STFT](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.ShortTimeFFT.html) for example).
- perhaps we could add a window function to apply, but this is not necessary as this can be done without.
This unifies rolling and coarsen as rolling is simply hop=1 and coarsen is hop=window_size.
As for the implementation of this data structure, I suspect we could use [as_strided](https://numpy.org/devdocs/reference/generated/numpy.lib.stride_tricks.as_strided.html) for a very efficient implementation of construct.
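To make the intent concrete, a rough sketch (not a proposed API) of the construct-like behaviour I have in mind, built on `numpy.lib.stride_tricks.sliding_window_view`; coordinate handling is deliberately simplistic and the function name is made up:
```python
# Sketch: sliding windows of size `window_size` taken every `hop` steps along `dim`.
import numpy as np
import xarray as xr
from numpy.lib.stride_tricks import sliding_window_view


def sliding_windows(da: xr.DataArray, dim: str, window_size: int, hop: int) -> xr.DataArray:
    axis = da.get_axis_num(dim)
    # length along `dim` becomes n - window_size + 1; the window axis is appended last
    windows = sliding_window_view(da.values, window_size, axis=axis)
    # keep only every `hop`-th window along the original dimension
    windows = windows[(slice(None),) * axis + (slice(None, None, hop),)]
    coords = {name: c for name, c in da.coords.items() if dim not in c.dims}
    return xr.DataArray(windows, dims=list(da.dims) + [f"{dim}_window"], coords=coords)


# hop=1 behaves like rolling(...).construct(), hop=window_size like coarsen(...).construct()
da = xr.DataArray(np.arange(10.0), dims="time")
print(sliding_windows(da, "time", window_size=4, hop=2).shape)  # (4, 4)
```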
### Describe alternatives you've considered
- Using rolling + sel to only pick the windows with the correct hop, but this seems extremely inefficient
- Handling the coordinates manually...
### Additional context
_No response_
|
closed
|
2024-10-29T18:05:37Z
|
2024-11-01T15:15:42Z
|
https://github.com/pydata/xarray/issues/9696
|
[
"enhancement",
"topic-rolling"
] |
JulienBrn
| 3 |
fastapi-users/fastapi-users
|
asyncio
| 162 |
Can't connect to mongo docker
|
Okay, I spent a day on this, but I still can't make it work.
> .env
```
# Mongo DB
MONGO_INITDB_ROOT_USERNAME=admin-user
MONGO_INITDB_ROOT_PASSWORD=admin-password
MONGO_INITDB_DATABASE=container
```
> docker-compose.yml
```
mongo-db:
  image: mongo:4.2.3
  env_file:
    - .env
  ports:
    - 27017:27107
  volumes:
    - ./bin/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
  # restart: always add this production
api:
  build:
    context: ./backend
    dockerfile: Dockerfile
  command: uvicorn app.main:app --host 0.0.0.0 --port 8006 --reload
  volumes:
    - ./backend:/app
  env_file:
    - .env
  depends_on:
    - mongo-db
  ports:
    - "8006:8006"
```
> mongo-init.js
```
db.auth('admin-user', 'admin-password')

db = db.getSiblingDB('container')

db.createUser({
  user: 'test-user',
  pwd: 'test-password',
  roles: [
    {
      role: 'root',
      db: 'admin',
    },
  ],
});
```
I'm using this as an example: https://frankie567.github.io/fastapi-users/configuration/full_example/
I changed a few lines:
`DATABASE_URL = "mongodb://test-user:test-password@mongo-db/container"`
```
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL)
db = client["container"]
collection = db["users"]
user_db = MongoDBUserDatabase(UserDB, collection)
```
So I think FastAPI is linked to the Mongo container:

but when I try to register a user using the FastAPI docs section, I get
"Internal server error"
What did I leave unfinished?
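In case it's useful, this is the quick check I run from inside the `api` container to see whether the credentials work at all (a sketch that is independent of fastapi-users):
```python
# Connectivity/auth smoke test against the mongo-db service, run inside the api container.
import asyncio

import motor.motor_asyncio


async def main() -> None:
    client = motor.motor_asyncio.AsyncIOMotorClient(
        "mongodb://test-user:test-password@mongo-db/container"
    )
    db = client["container"]
    # raises OperationFailure if the user/password/auth database is wrong
    print(await db.list_collection_names())


asyncio.run(main())
```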
|
closed
|
2020-04-23T14:16:15Z
|
2020-04-24T07:15:56Z
|
https://github.com/fastapi-users/fastapi-users/issues/162
|
[
"question"
] |
galvakojis
| 2 |
pallets/flask
|
flask
| 5,642 |
PROVIDE_AUTOMATIC_OPTIONS causes KeyError if not set
|
https://github.com/pallets/flask/blob/bc098406af9537aacc436cb2ea777fbc9ff4c5aa/src/flask/sansio/app.py#L641C12-L641C86
Simply changing this to `self.config.get("PROVIDE_AUTOMATIC_OPTIONS", False)` should resolve the problem.
This change now released is causing upstream trouble in other packages such as Quart:
https://github.com/pallets/quart/issues/371
|
closed
|
2024-11-14T12:01:51Z
|
2024-11-14T16:55:57Z
|
https://github.com/pallets/flask/issues/5642
|
[] |
develerltd
| 3 |
igorbenav/FastAPI-boilerplate
|
sqlalchemy
| 89 |
typing.Dict, typing.List, typing.Type, ... deprecated
|
With `ruff check`:
```sh
src/app/api/paginated.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/paginated.py:1:1: UP035 `typing.List` is deprecated, use `list` instead
src/app/api/v1/login.py:2:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/logout.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/posts.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/rate_limits.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/tasks.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/tiers.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/api/v1/users.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/core/security.py:2:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/core/setup.py:1:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/core/utils/cache.py:5:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/core/utils/cache.py:5:1: UP035 `typing.List` is deprecated, use `list` instead
src/app/core/utils/cache.py:5:1: UP035 `typing.Tuple` is deprecated, use `tuple` instead
src/app/crud/crud_base.py:2:1: UP035 `typing.Dict` is deprecated, use `dict` instead
src/app/crud/crud_base.py:2:1: UP035 `typing.List` is deprecated, use `list` instead
src/app/crud/crud_base.py:2:1: UP035 `typing.Type` is deprecated, use `type` instead
src/app/crud/helper.py:1:1: UP035 `typing.List` is deprecated, use `list` instead
src/app/crud/helper.py:1:1: UP035 `typing.Type` is deprecated, use `type` instead
Found 19 errors.
```
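For reference, the kind of change UP035 asks for, as an illustrative before/after only (the function is made up):
```python
# Before: deprecated typing aliases
from typing import Dict, List


def tags_by_user(rows: List[Dict[str, str]]) -> Dict[str, List[str]]: ...


# After: built-in generics (available on Python 3.9+)
def tags_by_user(rows: list[dict[str, str]]) -> dict[str, list[str]]: ...
```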
|
closed
|
2023-12-31T19:46:00Z
|
2024-01-01T23:21:12Z
|
https://github.com/igorbenav/FastAPI-boilerplate/issues/89
|
[
"bug",
"good first issue"
] |
igorbenav
| 3 |
huggingface/datasets
|
nlp
| 6,717 |
`remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio
|
### Describe the bug
When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated.
### Steps to reproduce the bug
Minimal error code:
```python
from datasets import load_dataset
dataset_name = "zinc75/Vibravox_dummy"
config_name = "BWE_Larynx_microphone"
# if we use "ASR_Larynx_microphone" subset which is a monochannel audio, no error is thrown.
dataset = load_dataset(
    path=dataset_name, name=config_name, split="train", streaming=True
)
dataset = dataset.remove_columns(["sensor_id"])
# dataset = dataset.map(lambda x:x, remove_columns=["sensor_id"])
# The commented version does not produce an error, but loses the dataset features.
sample = next(iter(dataset))
```
Error:
```
Traceback (most recent call last):
File "/home/julien/Bureau/github/vibravox/tmp.py", line 15, in <module>
sample = next(iter(dataset))
^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1392, in __iter__
example = _apply_feature_types_on_example(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1080, in _apply_feature_types_on_example
encoded_example = features.encode_example(example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1889, in encode_example
return encode_nested_example(self, example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in encode_nested_example
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in <dictcomp>
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1300, in encode_nested_example
return schema.encode_example(obj) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/audio.py", line 98, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 343, in write
with SoundFile(file, 'w', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7fd795d24680>: Format not recognised.
Process finished with exit code 1
```
### Expected behavior
I would expect this code to run without error.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.11.0
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0
|
open
|
2024-03-05T09:33:26Z
|
2024-08-14T17:54:20Z
|
https://github.com/huggingface/datasets/issues/6717
|
[] |
jhauret
| 2 |
tensorflow/tensor2tensor
|
machine-learning
| 944 |
Distributed Training with 4 machines on Translation Task
|
Hello, I want to do distributed training on an En-Zh translation task using four machines, each with 8 1080 Ti GPUs; the t2t version is 1.6.5. I have seen the other similar issues and distributed_training.md, but I still have some confusion.
If the four machines are named M1, M2, M3, M4, the first step is to create the TF_CONFIG. I want all 4 machines to train together, so I am not sure how to define the master and ps. The second question is whether the four machines can do synchronous training.
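For reference, this is roughly the TF_CONFIG shape I have been experimenting with on M1 (adapted from the generic TensorFlow cluster spec, so the exact keys may differ from what `t2t-make-tf-configs` generates; corrections welcome):
```python
# Rough sketch of a TF_CONFIG for the master process on M1; the ps/worker split
# across M1-M4 here is just one possible layout, not an official recommendation.
import json
import os

cluster = {
    "master": ["M1:2222"],
    "ps": ["M2:2222"],
    "worker": ["M3:2222", "M4:2222"],
}
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": cluster,
    "task": {"type": "master", "index": 0},
    "environment": "cloud",
})
```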
Hope someone can give me some advice, thanks.
|
open
|
2018-07-18T02:23:58Z
|
2018-08-16T20:34:24Z
|
https://github.com/tensorflow/tensor2tensor/issues/944
|
[] |
libeineu
| 10 |