Add a new Claude-based workflow that has Claude review PRs opened by Dependabot. Base it on the claude.yml workflow and keep the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub Action, and have the prompt look up the changelog for every dependency Dependabot changed, check them for breaking changes, and report whether we're impacted.
- Last 7 days
-
github.com
-
-
Additional Observations:
-
Correct Approach
-
Recommended Fix:
-
Findings:
-
-
github.com
-
The agent blocks are missing their input/output pins because the input_schema and output_schema properties are not being populated in the GraphMeta objects when flows are loaded. When these are undefined, the CustomNode component falls back to empty schemas {}, resulting in no pins being rendered.
-
When rendered in CustomNode.tsx (lines 132-137), agent blocks replace their schema with the hardcoded values:
-
The Likely Cause:
-
Root Cause Identified
-
-
github.com
-
The Fix Applied
-
Root Cause Analysis
-
Successfully fixed the TypeError that occurred when the DataForSEO API returns an unexpected response structure where items could be None.
-
Added a null check in autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py to ensure items is never None before iterating. Verified that existing tests still pass after the fix.
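A minimal sketch of that kind of guard; the tasks -> result -> items nesting below is an assumption for illustration, not necessarily the exact DataForSEO response shape handled in related_keywords.py:

    # Guard against items being None before iterating (illustrative response layout).
    response = {"tasks": [{"result": [{"items": None}]}]}  # items may come back as None

    keywords = []
    for task in (response.get("tasks") or []):
        for result in (task.get("result") or []):
            items = result.get("items") or []  # never iterate over None
            for item in items:
                keywords.append(item.get("keyword"))
    print(keywords)  # [] instead of a TypeError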
-
-
github.com
-
I also just ran into this issue after cloning from master a few hours ago; message_agent went over the limit once, after which subsequent calls also failed. Telling the system to delete and re-create the agent got it past the bottleneck. Maybe some way to restrict the history provided to sub-agents would work?
-
Basically the max is 8192 tokens in this context; lowering that will force it to split text into smaller chunks, i.e. def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]: would split anything above that, I believe. It's linked into messages and other functions.
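A minimal sketch of what such a splitter could look like, using the signature quoted above; this is an illustration of the chunking idea, not the actual AutoGPT implementation:

    from typing import Generator

    def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:
        """Yield newline-delimited chunks no longer than max_length characters."""
        current = ""
        for line in text.split("\n"):
            # A single line longer than max_length is still yielded as-is in this sketch.
            if len(current) + len(line) + 1 <= max_length:
                current = f"{current}\n{line}" if current else line
            else:
                if current:
                    yield current
                current = line
        if current:
            yield current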
-
Should be fixed in #2542 just now. Please pull master, check that your .env is up to date with .env.template, try again, and let us know if it's still broken for you.
-
Maybe try changing this line in autogpt/processing/text.py? def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: Honestly, I'm still checking to see if that'd be it, but doubtful lol
-
My issue is usually generated by the browse function, so I'm changing this:
    self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
    self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
to:
    self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 2192))
    self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
-
I'm running into the same; we need to limit certain chunks, I think. We should be able to change the chunk size to fix it. Not sure if that'll fix the total token amount, but we can make the summaries smaller too.
-
-
github.com
-
Here is how you can check if there is some bottleneck on your machine: Instead of running ./run.bat (or ./run.sh) you can run: python -m cProfile -o profile.pickle -m autogpt
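Once the profile file exists, the standard-library pstats module can summarize it, for example:

    import pstats

    # Load the profile written by `python -m cProfile -o profile.pickle -m autogpt`
    # and print the 20 functions with the highest cumulative time.
    stats = pstats.Stats("profile.pickle")
    stats.sort_stats("cumulative").print_stats(20)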
-
-
github.com
-
This is a prompting issue and a limitation of LLMs
-
I just deleted the three lines in autogpt/prompt.py. Maybe not the nicest solution, but it works so far.
-
Could we give do_nothing a lower variability, maybe?
-
-
github.com
-
I then realized, after looking into the Docker container while the project is running, that AutoGPT is in fact writing files to the directory /app/autogpt/workspace/auto_gpt_workspace, though it's only accessible via a terminal inside the running Docker container. Due to the nature of Docker containers, as soon as you exit the running AutoGPT you lose any documents it creates. So it could be that running this project via Docker has a particular issue moving the files back out whenever it completes a write to a file. I'm totally new to AutoGPT; I just set it up yesterday and I will try to investigate why this issue is happening.
-
Results are not written to a file (disappointing ongoing issue) #3583
-
-
github.com
-
After changing the docker-compose volume, it worked
-
I'm also running into this problem. I've confirmed that writing to the workspace within Docker yields the expected result, so I know it isn't a problem with my use of Docker
-
Looks like you're using Docker? If so, this worked for me: create auto-gpt.json in the project root and mount auto-gpt.json with the docker run command; e.g.:
-
Maybe you can try again with EXECUTE_LOCAL_COMMANDS=false and RESTRICT_TO_WORKSPACE=true, and see if the file is written to the auto_gpt_workspace folder.
-
-
github.com
-
Thanks for the tip. I had to enable Virtual Machine in the BIOS to run Docker now. (...)! I believe it worked! One strange thing though, as you can see: it first states it can't find the file, then proceeds to read the output of the file anyway (meaning it found the file): Executing file 'generate_dinner_recipe.py' in workspace 'auto_gpt_workspace' [2023-04-07T03:22:43.847792900Z][docker-credential-desktop.EXE][W] Windows version might not be up-to-date: The system cannot find the file specified. SYSTEM: Command execute_python_file returned: (...) BUT, I can now read from executing files, which feels amazing, like this was a big step, and THANK you
-
Make sure Docker is installed and you have permission to run docker commands: docker run hello-world
-
- Sep 2025
-
github.com
-
""Handles loading of plugins."""
-
Hehe, just remove the software - no problem then. Doesn't quite fix anything. If looking for a fix, you could check out my previous comment.
-
fixed here
-
Its the plugins that need updating.
-
I also think it's that. I find that plugins_config (at line 178 of Auto-GPT-stable/autogpt/plugins/__init__.py) is always empty, no matter whether it's configured "correctly", and it's not clear how plugins need to be defined, considering every plugin developer uses a different name (some with dashes, others without, etc.) and some don't yet implement the template plugin class, though from what I read that isn't strictly necessary yet. They talk about plugins working "as long as they are in the correct (NEW) format", but it isn't clear what that format is either.
-
-
github.com
-
This is only possible after each chunk of continuously executed steps. I almost never let it run continuously since it often goes wild or runs the same thing in loops without getting anything done.
-
Whenever it has completed auto-tasks (I usually do tasks in blocks of 50 (y -50) to 200 (y -200)), you can type in a message to it instead of typing y -xx or n (to exit). It will say it doesn't understand but typically "fixes" itself and sometimes will accept what you've written.
-
I put my Pinecone key and region in the .env file, but AutoGPT only uses local memory and never writes anything to a file. It also seems to forget what it has already researched after a few steps.
-
but I tried Pinecone and it seems useless… I will try again
-
Try using something different than the local memory. I downloaded the code 5 days ago so I don't know if it has changed, but inside the config.py file in the scripts folder, on line 75, the memory backend is hardcoded to local. Change local to pinecone and use a Pinecone API key if you want.
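A hedged sketch of making that configurable instead of hardcoded; the MEMORY_BACKEND / PINECONE_* variable names are assumptions for illustration:

    import os

    # Read the memory backend from the environment instead of hardcoding "local".
    memory_backend = os.getenv("MEMORY_BACKEND", "local")
    if memory_backend == "pinecone":
        pinecone_api_key = os.getenv("PINECONE_API_KEY")
        pinecone_region = os.getenv("PINECONE_ENV")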
-
Unable to write a file locally; it always reports a JSON error. I have been trying for more than ten hours since yesterday. I wasted a lot of OpenAI tokens and a lot of my time. Please, can you solve this first?
-
-
github.com
-
Describe the problem
-
-
github.com
-
Both libraries use signals as an identifier, which leads to a namespace collision.
-
#undef signals
-
Standalone code to reproduce the issue
-
Using MSVC 2022, but same error occurs while using LLVM/MSVC2019 compiler with my C++ application (in Qt 6.5.0)
-
-
github.com
-
approved
-
config["callbacks"] = [*callbacks, mlflow_callback]
-
add test … 439cbd4
test case added
-
-
github.com
-
Hi @cheyennee, OOM errors are not a problem with TensorFlow, and with a low number of filters it is indeed working. Oh sorry, I missed this in your logs.
-
OOM when allocating tensor with shape[50982027264] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node gradient_tape/UnsortedSegmentSum/pfor/UnsortedSegmentSum}}]]
-
It may be that the size is too large and the memory is overflowing.
-
Conv3DTranspose_class = tf.keras.layers.Conv3DTranspose(
    filters, kernel_size, strides=strides, padding=padding,
    output_padding=output_padding, data_format=data_format,
    dilation_rate=dilation_rate, activation=activation, use_bias=use_bias,
    kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
    kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
    activity_regularizer=activity_regularizer,
    kernel_constraint=kernel_constraint, bias_constraint=bias_constraint)
layer = Conv3DTranspose_class
inputs = __input___0
with tf.GradientTape() as g:
-
During backpropagation, the API crashes. Based on the error message, it appears to be indicative of an OOM situation.
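A minimal, hedged reconstruction of that repro with small illustrative shapes (the reported OOM comes from much larger ones):

    import tensorflow as tf

    layer = tf.keras.layers.Conv3DTranspose(filters=2, kernel_size=3, strides=2, padding="same")
    inputs = tf.random.normal([1, 4, 4, 4, 1])

    with tf.GradientTape() as g:
        g.watch(inputs)
        outputs = layer(inputs)
        loss = tf.reduce_sum(outputs)

    # Backpropagation step; with very large shapes this is where the OOM is reported.
    grads = g.gradient(loss, layer.trainable_variables)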
-
- Aug 2025
-
github.com
-
The issue has been resolved. Installing tf-keras and using keras from it instead of tf.keras fixed the problem. Thank you!
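A minimal sketch of that workaround, assuming tf-keras is installed (pip install tf-keras); it builds the model with the Keras 2 implementation shipped as tf_keras instead of the Keras 3 bundled with recent TensorFlow:

    import tensorflow_hub as hub
    import tf_keras as keras  # Keras 2 compatibility package

    classifier_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
    IMAGE_SHAPE = (224, 224)

    # hub.KerasLayer is a Keras 2 layer, so tf_keras.Sequential accepts it.
    classifier = keras.Sequential([
        hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE + (3,)),
    ])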
-
the gist notebook executed successfully however am still getting the error on this machine :
-
Could you try to modify tf.keras to keras and execute the code? I have changed some steps, like using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without error/failure. Kindly find the gist of it here. Thank you!
-
The solution where I installed tf_keras worked for that section, but I’m encountering a similar error in the "Attach a classification head" section of the same notebook. However, the previous solution does not seem to work in this case.
-
Also I have changed some steps like modifying tf_keras/keras.Sequential instead of tf.keras.Sequential and the code was executed without error/fail. Kindly find the gist of it here.
-
Hi, by default the Colab notebook is using TensorFlow v2.17, which contains Keras 3.0, and that was causing the error. Could you please try to import Keras 2.0 with the below commands.
-
added
-
ValueError: Only instances of `keras.Layer` can be added to a Sequential model. Received: <tensorflow_hub.keras_layer.KerasLayer object at 0x7d7e43bbeb10> (of type <class 'tensorflow_hub.keras_layer.KerasLayer'>)
-
import tensorflow as tf
import tensorflow_hub as hub

mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"

classifier_model = mobilenet_v2  # @param ["mobilenet_v2", "inception_v3"] {type:"raw"}
IMAGE_SHAPE = (224, 224)

classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE + (3,))
])

link to notebook: "https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"
-
When attempting to add hub.KerasLayer to a tf.keras.Sequential model, TensorFlow raises a ValueError, stating that only instances of keras.Layer can be added. However, hub.KerasLayer is a subclass of keras.Layer, so this behavior seems unexpected. I expected hub.KerasLayer to be accepted as a valid layer in the tf.keras.Sequential model, as per the TensorFlow documentation.
-
- Jun 2025
-
github.com
-
[XLA:GPU] Check whether the propagated tile offsets can be used. …
Commit message describes the model interpretability checking in detail
-
"nvcc --compiler-bindir /path/to/clang" sets __clang__ while compiling CUDA code. This causes gpu_device_functions.h to think it is being compiled with Clang and try to use a Clang-specific function.
-
-
github.com
-
sklearn 1.6 installed with conda (above); sklearn 1.3 installed with conda; sklearn 1.6 installed with pip
Cross Version Validation
-
-
github.com
-
return
Fixed some miscellaneous issues after reviewing
-
-
-
ONNX 1.17.1 prints:
Cross validation
-
-
github.com
-
This PR fixes a C compatibility issue in the TfLiteQuantizationType enum definition. The current definition uses C++ syntax (enum : int) which causes compilation errors when included in C projects. This PR adds conditional compilation directives to use C++ syntax only when compiling with C++.
-
-
github.com
-
But I think it was a lot of gdb backtrace that leads me to the file.
-
Whenever FreezeSavedModel function is called in the code, tensorflow::ClientSession cannot execute properly. Things work absolutely fine if FreezeSavedModel function is commented out.
-
-
github.com
-
TF version: 1.13.1, bazel is 0.19.2. Platform:
-
It seems that it is getting confused with the double quotes.
-
Facing the same issue here. My OS is Ubuntu 18.04.
-
-
github.com
-
TOCO applies a set of optimizations to reduce the model size, improve inference speed, and ensure compatibility with the target platform.
-
We are converting them to reshapes "so that we can use standard reshape optimization transforms".
-
The behavior you mentioned, where tf.squeeze() is converted to a reshape operator when using the TOCO (Tens
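Conceptually, squeezing only drops size-1 dimensions, so it can always be expressed as a reshape to the same target shape, which is why a converter may lower it that way; a small sketch:

    import tensorflow as tf

    x = tf.zeros([2, 1, 3])

    squeezed = tf.squeeze(x, axis=1)   # shape (2, 3)
    reshaped = tf.reshape(x, [2, 3])   # same result via an explicit reshape

    assert squeezed.shape == reshaped.shape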
-
-
github.com
-
Please use pip<24.1 if you need to use this version.
-
absl-py (<0.11>=0.10.0)
-
Ignoring version 0.3.1.dev202105110329 of tflite-model-maker-nightly since it has invalid metadata:
-
I'm able to replicate the same behavior from my end
-
-
github.com
-
Are you satisfied with the resolution of your issue?
-
Standalone code to reproduce the issue
-
-
github.com
-
an M1 Mac running Big Sur, Homebrew Python 3.8.12, pip version 21.3, using the tensorflow_macos virtual environment
-
Taking @alfaro96's comment as a hint, I tried another pip install but with Python 3.8, and it worked. Hope this helps other people.
-
No module named pip
-
"pip install --pre --extra-index https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn"; this works too if you are on 3.9
-
We have not released a version supporting Python 3.9 yet in PyPi.
-
-
github.com
-
https://reviews.llvm.org/D81045 should help here.
-
That's right - that would be the minimal form - the copy has nothing to do with this issue. Not a high priority issue, but would be good to fail verification in such cases.
validated
-
So the issue is with the custom terminator which isn't strict about the possible parent operations? Then the minimal example is something like:
-
To reproduce: $ bazel-bin/tensorflow/compiler/mlir/tf-opt verify.mlir.
-
-
github.com
-
But indeed we can also stop building/testing our Python 3.13 wheels with numpy nightly, so updating that in #59819
config updated
-
Note that numpy 2.1.1 has Python 3.13 wheels and still has np._get_promotion_state so one option would be to switch to released numpy.
-
BUG: Remove np._get_promotion_state usage #59818
-
lesteve mentioned this on Sep 16, 2024: ⚠️ CI failed on Wheel builder (last failure: Sep 16, 2024) ⚠️ scikit-learn/scikit-learn#29852
-
AttributeError: module 'numpy' has no attribute '_get_promotion_state'
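A hedged sketch of the kind of guard that avoids this crash on NumPy builds where the private attribute has been removed:

    import numpy as np

    # _get_promotion_state is a private NumPy API that no longer exists in newer
    # NumPy builds, so only call it when it is actually present.
    if hasattr(np, "_get_promotion_state"):
        promotion_state = np._get_promotion_state()
    else:
        promotion_state = None  # assume the new default promotion rules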
-
-
github.com
-
This guide details the migration from Estimator to Keras APIs: https://www.tensorflow.org/guide/migrate/migrating_estimator
Migration guidance and deprecation info shared
-
Faced the same issue with tensorflow 2.3.0 and tensorflow-hub 0.10.0. I just solved it by uninstalling tensorflow-estimator, tensorflow-hub and tensorflow, then installed tensorflow and tensorflow-hub. Now it's working :-)
-
Reinstall TensorFlow and TensorFlow Hub and it worked :-)
-
So they may change between versions. My guess is that for me it was an installation issue
-
tf.estimator.Estimator(model_fn)
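For context, the Keras-style replacement for a simple Estimator setup looks roughly like this (layer sizes and data are illustrative, not taken from the thread):

    import tensorflow as tf

    # Build, compile, and fit a Keras model instead of defining a model_fn
    # and wrapping it in tf.estimator.Estimator.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    x = tf.random.normal([64, 10])
    y = tf.cast(tf.random.uniform([64, 1]) > 0.5, tf.float32)
    model.fit(x, y, epochs=1, verbose=0)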
-
-
github.com
-
This was released as part of modelstore==0.0.75
-
I can confirm that it works with the latest main
-
I try exists(); if that fails with a ValueError, I try list_blobs().
-
It is triggered when bucket.exists() is called,
-
I've managed to reproduce this error without modelstore.
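A hedged sketch of that fallback, assuming the google-cloud-storage client; the function name is illustrative:

    from google.cloud import storage

    def bucket_is_reachable(client: storage.Client, bucket_name: str) -> bool:
        """Try bucket.exists(); if it raises ValueError, fall back to listing one blob."""
        bucket = client.bucket(bucket_name)
        try:
            return bucket.exists()
        except ValueError:
            # Fallback described above: listing blobs succeeds even when exists() fails.
            next(iter(client.list_blobs(bucket_name, max_results=1)), None)
            return True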
-
-
github.com
-
Labels: TF2.14 (For issues related to Tensorflow 2.14.x), comp:keras
-
-
github.com
-
The Tensorflow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings, with all the debugging information which could help us investigate.
-
we require input indices to be unique. Otherwise, not only is the output non-deterministic on GPU, but gradients are broken on any device.
-
I was able to reproduce the issue on tensorflow v2.8, v2.9 and nightly. Kindly find the gist of it here.
-
-
github.com
-
edited by ekdnam
Edited the post to revise
-
So some weights in the new model have a different shape compared to the old model.
-
Could you please submit a minimal code snippet for reproduction of the issue? Also, the dataset modified_train.txt is missing.
-
- May 2025
-
github.com
-
keras_core will subsequently be released as Keras 3, and the tf.keras module will become legacy code. Thanks!
dead code removed
-
I have checked the code with keras_core, which is now a multi-backend library. With keras_core the reported behaviour does not occur. Please refer to the attached gist.
-
loop in reconstruct_from_config()
-
I was able to replicate this issue on colab, please find the gist here. Thank you!
-
-
github.com
-
The areas which correspond to the differences in the lite versions are actually different in the h5 files. Furthermore, the dimensionality of some elements has changed from the start to the red arrow and downwards. Can either of you try again when the model architecture is identical prior to conversion?
-
I found that the branch of the "Quantize" node for concat op quantization is different (as shown in the figure below): left is ok, and right is bad.
-
It is not clear how you are getting different results on quantization; perhaps you can explain more. Are you observing incorrect results with tf.compat.v1.lite.TFLiteConverter.from_keras_model_file and correct results with tf.lite.TFLiteConverter.from_keras_model?
-
-
github.com
-
copybara-service bot merged commit 749de42 into master Feb 12, 2025 2 checks passed
-
deleted the exported_pr_715840018 branch
-
force-pushed the exported_pr_715840018 branch 19 times, most recently from 7ed12b8 to 4c7bc36 on February 11, 2025 03:16
-
-
github.com
-
It worked. Thank you very much.
-
tf.app.run is deprecated in TF 2.x; please use tf.compat.v1.app.run for TF 2.12.
-
-
github.com
-
B-Step62 merged commit 4eba574 into mlflow:master Feb 3, 2025 44 of 47 checks passed
-
Hence, this PR removes the system prompt from input example.
-
As a result, when users build ChatModel / ChatAgent with those providers, they cannot log the model and get a confusing error: with mlflow.start_run():
-
-
github.com
-
1 check passed
1 check passed
-
deleted
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite pa… …
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite parameter & result memory spaces.
-
-
github.com
-
deleted
deleted a branch
-
Mark tracing APIs as experimental …
-
Mark all user-facing tracing APIs as experimental.
-
-
github.com
-
Reaching basic code quality prevents large merge conflicts and allows for testing so PRs don't break functionality. It also allows for a speedy PR review process. Until then, I'll be maintaining an active fork with best practices for Python development. Feel free to fork this code to make the idea work in your repo.
-
CI is red
-
I’ll start actively maintaining it after some (any) pr making scripts a module is merged. That’s a large part of the diff, and continuing to contribute to the repo is just not worth the effort without that change.
-
Now the system doesn't need to request the users response
-
The main client currently connects to the RabbitMQ database, but occasionally the connection is dropped. I'm having trouble getting the AI to use the commands in order for me to debug them. The QA client has been rewritten using Rich to provide a very nice UI for question answering, but I haven't tested it yet. I'm a bit bogged down at work. Still hoping to work on it and be done this week, at least by this weekend.
-
Renamed scripts to autogpt. Absolute Imports. isort. Fixed flake8 F40… …
added boilerplate code
-
-
-
I thought I could use it for free. If that works on your side, that's good.
-
Hi @Gumichocopengin8, did you buy any credits for your OpenAI API? The above error happens when you don't have any credits on your API.
Identified the issue
-
I did that too, as well as OPENAI_API_KEY=xxx python code.py, but I don't think it matters; both didn't work.
Implemented the proposed fix
-
Thanks for the report, and thanks @joelrobin18 for the PR. @Gumichocopengin8, does changing openai.ChatCompletion to openai.chat.completions fix the code sample for you?
check other versions of OpenAI
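For reference, a hedged sketch of the openai>=1.0 style of that call (the model name is illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # openai>=1.0 replaces openai.ChatCompletion.create with client.chat.completions.create.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)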
-
-
github.com
-
Thanks for your help. I could register the graph with MLflow 2.20. Please add this guidance to the documentation as well.
Version update
-
Could you try updating MLflow to 2.20 or newer? The dictionary parameter type is supported since 2.20.
check version
-
You can pass different thread ID (params) at runtime. The one passed to log_model is just an input "example" for MLflow to determine the input signature (type) for the model.
Discussion
-
You can use params to pass the configurable object including thread ID.
suggested a new parameter
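A hedged sketch of that pattern; the model URI, input format, and thread IDs are placeholders, not values from the thread:

    import mlflow

    # The example passed at logging time only fixes the input signature; the actual
    # thread ID is supplied per call through params at inference time.
    model = mlflow.pyfunc.load_model("models:/my-langgraph-agent/1")  # placeholder URI
    answer = model.predict(
        {"messages": [{"role": "user", "content": "Hi"}]},
        params={"thread_id": "session-42"},  # runtime value, not the logged example
    )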
-
-
github.com
-
Copy-and-paste code to reproduce this:
Reproduced the bug
-
Okay, I found out that I need to specify the parameter for subsample to be something less than 1 to get the score.
Parameter not specified
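Assuming this refers to scikit-learn's gradient boosting out-of-bag estimates, a minimal sketch:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=200, random_state=0)

    # oob_improvement_ is only populated when subsample < 1.0, because out-of-bag
    # samples only exist when each stage is fit on a random subset of the data.
    clf = GradientBoostingClassifier(subsample=0.8, random_state=0).fit(X, y)
    print(clf.oob_improvement_[:5])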
-
-
github.com
-
adrinjalali approved these changes
approved
-
I left a few comments, please fix the rest of the PR accordingly.
reviewed the code
-
Unit tests created to verify correct warning messages are raised upon usage of n_alphas parameter. Unit tests created to verify correct warning message if using the default value of alphas. All unit tests passing and all warnings suppressed on existing test cases using filter warnings.
Unit test added
-
Updated LinearModelCV and derived classes LassoCV, ElasticNetCV, MultiTaskElasticNetCV, MultiTaskLassoCV to remove n_alphas parameter from the constructor. alphas parameter is updated to support an integer or array-like argument. Functionality of n_alphas is preserved by passing an integer to alphas. Parameter_constraints updated accordingly.
changed the derived class parameters
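Based on that description, both spellings below would be accepted once the change lands (a sketch, assuming a scikit-learn version that includes this PR):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV

    X, y = make_regression(n_samples=100, n_features=10, random_state=0)

    # alphas as an integer: generate that many alphas automatically
    # (the behaviour previously controlled by n_alphas).
    LassoCV(alphas=100).fit(X, y)

    # alphas as an explicit array-like grid.
    LassoCV(alphas=np.logspace(-4, 0, 50)).fit(X, y)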
-
-
-
requested a review from a team as a code owner
asked for code review
-
Codecov Report
verified the test case result
-
-
github.com
-
Onnx Build Logs Iteration-1.txt. cyyever commented on Mar 21, 2025 (edited): @vijayaramaraju-kalidindi The situation is tricky in your case because the protobuf libraries are all static libraries. I need to know your Linux distribution.
Reviewing the solution
-
A quick workaround is to apt-get remove the protobuf packages and build onnx from https://github.com/cyyever/onnx/tree/protobuf6. This branch of onnx will download and compile protobuf 6.30.1 as a dependency automatically, which matches the Python protobuf.
Removing the dependency
-
We must detect such cases and link to protobuf::libprotobuf, which should be a shared library.
Proposed a new branch
-
Thanks this worked. Successfully built ONNX.
Verified the Fix
-
The PR has been merged; you could try the main branch.
Merged the PR after Successful retest
-
This error originates from a subprocess, and is likely not a problem with pip.
Identified where the error originated
-
-
github.com
-
copybara-service mentioned this on Apr 15, 2025: PR #91416: Fix C compatibility issue in TfLiteQuantizationType enum (google-ai-edge/LiteRT#1736)
Fixed C compatibility issue
-
Fix C compatibility issue in TfLiteQuantizationType enum #91416
Implemented the fix
-
No problems using TensorFlow Lite 2.18. This commit 977257e caused the issue.
Root Cause Analysis
-
I'm closing this issue because PR #91416 got merged. If you face any issue, please feel free to post your comments; if required, I'll reopen this issue.
Merging and Closing the Issue
-
-
github.com
-
google-ml-butler assigned tilakrayal
Assigned to a user
-
Relevant log output
Found the problem in gpu_device_functions.h
-
Standalone code to reproduce the issue
Tried to reproduce the standalone Code
-
plopresti added a commit that references this issue
Commit to Fix the issue
-
-
github.com
-
Fixes API Deprecate n_alphas in LinearModelCV #30616
Fixed the issue
-
adrinjalali closed this as completed in #30616
Approved & Merged
-
-
-
justinchuby closed this as completed
Closed the issue
-
Could you test with the latest onnx-weekly package (you can install it from pip)? It may have been fixed
Experimental Build Tested
-
The contents of op_run.to_array_extended(t), and in 1.17.1 those of onnx.numpy_helper.to_array(t) as well, may vary because it is actually an uninitialized array.
Root Cause Identified
-
-
github.com
-
Add links to examples from the docstrings and user guides #26927
Improve the docs
-