180 Matching Annotations
  1. Last 7 days
    1. Add a new Claude-based workflow that runs when Dependabot opens a PR and has Claude review it. Base it on the claude.yml workflow and make sure to include the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub Action, and have it look up the changelog for each changed dependency, check them for breaking changes, and let us know if we're impacted.
    1. The agent blocks are missing their input/output pins because the input_schema and output_schema properties are not being populated in the GraphMeta objects when flows are loaded. When these are undefined, the CustomNode component falls back to empty schemas {}, resulting in no pins being rendered.
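
      A minimal sketch of the suspected fix, using simplified stand-in types (the real GraphMeta and flow-loading code are assumed, not shown in the annotation): populate the schemas when flows are loaded rather than leaving them undefined, so the frontend never has to fall back to empty schemas.

          # Hypothetical, simplified models; field names follow the annotation above.
          from dataclasses import dataclass, field

          @dataclass
          class GraphMeta:
              id: str
              name: str
              # Default to empty dicts so consumers like CustomNode never see None.
              input_schema: dict = field(default_factory=dict)
              output_schema: dict = field(default_factory=dict)

          def to_graph_meta(graph) -> GraphMeta:
              # `graph` is assumed to expose aggregated input/output schemas;
              # the key point is copying them into the metadata object here.
              return GraphMeta(
                  id=graph.id,
                  name=graph.name,
                  input_schema=graph.input_schema or {},
                  output_schema=graph.output_schema or {},
              )
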
    1. I also just ran into this issue after cloning from master a few hours ago; message_agent went over the limit once, after which subsequent calls also failed. Telling the system to delete and re-create the agent got it past the bottleneck. Maybe some way to restrict the history provided to sub-agents would work?
    2. Basically the max is 8192 tokens in this context; lowering that will force it to split things into smaller chunks, i.e. def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None] would split anything above that, I believe (see the sketch after item 3 below). It's linked into messages and other functions.
    3. My issue is usually caused by the browse function, so I'm changing this:

          self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
          self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))

      to this:

          self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 2192))
          self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
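
      A minimal sketch of a splitter with the signature quoted in item 2, assuming a purely length-based split (the real AutoGPT split_text is token- and sentence-aware); it only illustrates how lowering max_length forces smaller chunks.

          from typing import Generator

          def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:
              """Yield consecutive chunks of `text`, each at most `max_length` characters."""
              for start in range(0, len(text), max_length):
                  yield text[start:start + max_length]
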
    1. I then realized, after looking into the Docker container while the project is running, that AutoGPT is in fact writing files to the directory /app/autogpt/workspace/auto_gpt_workspace. However, that directory is only accessible from a terminal inside the running container, and due to the nature of Docker containers, as soon as you exit the running AutoGPT you lose any documents it creates. So it could be that running this project via Docker has a particular issue moving the files back out of the container whenever it completes a write. I'm totally new to AutoGPT, I only set it up yesterday, and I will try to investigate why this issue is happening.
    1. Thanks for the tip. I had to enable virtualization in the BIOS to run Docker. (...) I believe it worked! One strange thing though, as you can see: it first states it can't find the file, then proceeds to read the output of the file anyway, meaning it did find it: Executing file 'generate_dinner_recipe.py' in workspace 'auto_gpt_workspace' [2023-04-07T03:22:43.847792900Z][docker-credential-desktop.EXE][W] Windows version might not be up-to-date: The system cannot find the file specified. SYSTEM: Command execute_python_file returned: (...) BUT I can now read output from executed files, which feels amazing, like this was a big step. THANK you!
  2. Sep 2025
    1. I also find that plugins_config (at line 178 of Auto-GPT-stable/autogpt/plugins/__init__.py) is always empty, no matter whether it is configured "correctly". It is not clear how plugins need to be defined properly, considering every plugin developer uses a different name for their plugin (some with dashes, others without, etc.), and some do not yet implement the template plugin class, though from what I read that is not yet strictly necessary. They talk about plugins working "as long as they are in the correct (NEW) format", but it is not clear what that format is either.
    1. Whenever it has completed its auto-tasks (I usually run tasks in blocks of 50, i.e. y -50, up to y -200), you can type a message to it instead of typing y -xx or n (to exit). It will say it doesn't understand, but it typically "fixes" itself and sometimes will accept what you've written.
    1. Conv3DTranspose_class = tf.keras.layers.Conv3DTranspose(
           filters, kernel_size, strides=strides, padding=padding,
           output_padding=output_padding, data_format=data_format,
           dilation_rate=dilation_rate, activation=activation, use_bias=use_bias,
           kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
           kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
           activity_regularizer=activity_regularizer,
           kernel_constraint=kernel_constraint, bias_constraint=bias_constraint)
       layer = Conv3DTranspose_class
       inputs = __input___0
       with tf.GradientTape() as g:
  3. Aug 2025
    1. Could you try changing tf.keras to keras and executing the code? I changed some steps, such as using tf_keras / keras.Sequential instead of tf.keras.Sequential, and the code executed without error. Kindly find the gist of it here. Thank you!
    2. import tensorflow as tf
       import tensorflow_hub as hub

       mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
       inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"
       classifier_model = mobilenet_v2  # @param ["mobilenet_v2", "inception_v3"] {type:"raw"}

       IMAGE_SHAPE = (224, 224)
       classifier = tf.keras.Sequential([
           hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE + (3,))
       ])

       Link to notebook: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb
    3. When attempting to add hub.KerasLayer to a tf.keras.Sequential model, TensorFlow raises a ValueError, stating that only instances of keras.Layer can be added. However, hub.KerasLayer is a subclass of keras.Layer, so this behavior seems unexpected. I expected hub.KerasLayer to be accepted as a valid layer in the tf.keras.Sequential model, as per the TensorFlow documentation.
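
      A sketch of the workaround suggested in item 1, assuming the tf-keras compatibility package is installed and that hub.KerasLayer is built against Keras 2: build the Sequential model with tf_keras instead of tf.keras so the layer base classes match.

          import tf_keras  # Keras 2 compatibility package (assumed installed)
          import tensorflow_hub as hub

          classifier_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
          IMAGE_SHAPE = (224, 224)

          # Using tf_keras.Sequential avoids the "only instances of keras.Layer
          # can be added" ValueError raised when tf.keras resolves to Keras 3.
          classifier = tf_keras.Sequential([
              hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE + (3,))
          ])
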
  4. Jun 2025
    1. The TensorFlow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration, which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings and all the debugging information that could help us investigate.
  5. May 2025
    1. The areas which correspond to the differences between the lite versions are actually different in the h5 files. Furthermore, the dimensionality of some elements has changed from the start down to the red arrow and below. Can either of you try again when the model architecture is identical prior to conversion?
    2. It is not clear how you are getting different results on quantization; perhaps you can explain more. Are you observing incorrect results with tf.compat.v1.lite.TFLiteConverter.from_keras_model_file and correct results with tf.lite.TFLiteConverter.from_keras_model?
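
      An illustrative sketch of the two conversion paths being compared (the "model.h5" path is a placeholder), showing where the quantization results could diverge.

          import tensorflow as tf

          # Legacy TF1 converter: loads the Keras model from an HDF5 file on disk.
          converter_v1 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
          tflite_v1 = converter_v1.convert()

          # TF2 converter: takes an in-memory Keras model object.
          model = tf.keras.models.load_model("model.h5")
          converter_v2 = tf.lite.TFLiteConverter.from_keras_model(model)
          tflite_v2 = converter_v2.convert()
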
    1. Reaching basic code quality prevents large merge conflicts and allows for testing so PRs don't break functionality. It also allows for a speedy PR review process. Until then, I'll be maintaining an active fork with best practices for Python development. Feel free to fork this code to make the idea work in your repo.
    2. The main client currently connects to the RabbitMQ broker, but the connection is occasionally being dropped. I'm having trouble getting the AI to use the commands so that I can debug them. The QA client has been rewritten using Rich to provide a very nice UI for question answering, but I haven't tested it yet. I'm a bit bogged down at work, but I still hope to work on it and be done this week, or at least this weekend.
    1. Unit tests created to verify that the correct warning messages are raised upon usage of the n_alphas parameter. Unit tests created to verify the correct warning message when using the default value of alphas. All unit tests passing, and all warnings suppressed in existing test cases using filterwarnings.

      Unit test added
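
      A hedged sketch of the kind of test described, assuming the deprecation surfaces as a FutureWarning mentioning n_alphas at fit time (scikit-learn's usual pattern; the exact warning class and message are assumptions).

          import numpy as np
          import pytest
          from sklearn.linear_model import LassoCV

          def test_n_alphas_deprecation_warning():
              X, y = np.random.rand(20, 3), np.random.rand(20)
              with pytest.warns(FutureWarning, match="n_alphas"):
                  LassoCV(n_alphas=10).fit(X, y)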

    2. Updated LinearModelCV and the derived classes LassoCV, ElasticNetCV, MultiTaskElasticNetCV, and MultiTaskLassoCV to remove the n_alphas parameter from the constructor. The alphas parameter is updated to support an integer or an array-like argument; the functionality of n_alphas is preserved by passing an integer to alphas. _parameter_constraints updated accordingly.

      changed the derived class parameters
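
      A short usage sketch of the API change described above, assuming a scikit-learn version where alphas accepts either an integer (the old n_alphas behaviour) or an explicit array-like grid.

          import numpy as np
          from sklearn.linear_model import LassoCV

          X, y = np.random.rand(50, 5), np.random.rand(50)

          # Integer: pick 100 alphas automatically (replaces n_alphas=100).
          model_int = LassoCV(alphas=100).fit(X, y)

          # Array-like: provide an explicit grid of alphas.
          model_grid = LassoCV(alphas=np.logspace(-4, 0, 100)).fit(X, y)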

    1. A quick workaround is to apt-get remove the protobuf packages and build onnx from https://github.com/cyyever/onnx/tree/protobuf6 . This branch of onnx will automatically download and compile protobuf 6.30.1 as a dependency, which matches the Python protobuf package.

      Removing the dependency