184 Matching Annotations
  1. Oct 2025
    1. I also just ran into this issue after cloning from master a few hours ago; message_agent went over the limit once, after which subsequent calls also failed. Telling the system to delete and re-create the agent got it past the bottleneck. Maybe some way to restrict the history provided to sub-agents would work?
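      A rough sketch of that last idea, capping the history handed to a sub-agent at a token budget; tiktoken, the message structure, and the limits here are illustrative assumptions, not Auto-GPT internals:

      ```python
      import tiktoken

      def truncate_history(messages: list[dict], max_tokens: int = 4000,
                           model: str = "gpt-3.5-turbo") -> list[dict]:
          """Keep only the most recent messages that fit within max_tokens."""
          enc = tiktoken.encoding_for_model(model)
          kept, used = [], 0
          for msg in reversed(messages):  # walk newest-first, keep what fits
              cost = len(enc.encode(msg["content"]))
              if used + cost > max_tokens:
                  break
              kept.append(msg)
              used += cost
          return list(reversed(kept))
      ```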
    2. Basically the max is 8192 tokens in this context; lowering it forces the text to be split into smaller chunks, i.e. `def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:` would split anything above that length, I believe. It's linked into messages and other functions.
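      A minimal sketch of a chunking generator with that signature; the character-based splitting below is an assumption, and the real helper may split on sentence or token boundaries instead:

      ```python
      from typing import Generator

      def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:
          """Yield consecutive chunks of text, each at most max_length characters long."""
          for start in range(0, len(text), max_length):
              yield text[start:start + max_length]
      ```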
    3. My issue is usually generated by the browse function, so I'm changing this:

      ```python
      self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
      self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
      ```

      to:

      ```python
      self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 2192))
      self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
      ```
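      Since both values are read with os.getenv, a hedged alternative to editing config.py is to override them from the environment (for example via .env); the variable names are taken from the snippet above, and the lowered value is just the one used there:

      ```python
      import os

      # Lower the browse chunk size without editing config.py; the defaults below
      # mirror the quoted snippet and only apply when the variable is unset.
      os.environ["BROWSE_CHUNK_MAX_LENGTH"] = "2192"

      browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
      browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
      print(browse_chunk_max_length, browse_summary_max_token)  # 2192 300
      ```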
    1. The same problem can be found in tf.keras.layers.ZeroPadding3D. Here is the repro code:

      ```python
      import tensorflow as tf

      padding = 16
      data_format = None
      __input___0_tensor = tf.random.uniform(
          [1, 1, 2, 2, 3], minval=0.8510533546655319, maxval=3.0, dtype=tf.float64
      )
      __input___0 = tf.identity(__input___0_tensor)
      ZeroPadding3D_class = tf.keras.layers.ZeroPadding3D(padding=padding, data_format=data_format)
      layer = ZeroPadding3D_class
      inputs = __input___0
      with tf.GradientTape() as g:
          g.watch(inputs)
          res = layer(inputs)
      print(res.shape)
      grad = g.jacobian(res, inputs)
      print(grad)
      ```
    2. It appears that the number of filters does make a difference 😂. In your gist, where the number of filters is set to 40, there are no crashes. However, when I reproduce the above code in Colab with the number of filters increased to 1792, it crashes, and the error message suggests a potential OOM issue.
    3. I have tested the given code on Colab and it is working fine; please refer to the attached gist. Note that I reduced the number of filters due to memory constraints, but that should not affect the reported behaviour. Could you please verify the behaviour in the attached gist? Also, can you confirm whether the issue is with the Windows package, since it will download the Intel package?
    1. Thanks for the tip. I had to enable virtualization (Virtual Machine support) in the BIOS to get Docker running. (...)! I believe it worked! One strange thing though, as you can see: it first states it can't find the file, then proceeds to read the output of the file anyway (meaning it did find the file):

      ```
      Executing file 'generate_dinner_recipe.py' in workspace 'auto_gpt_workspace'
      [2023-04-07T03:22:43.847792900Z][docker-credential-desktop.EXE][W] Windows version might not be up-to-date: The system cannot find the file specified.
      SYSTEM: Command execute_python_file returned: (...)
      ```

      BUT I can now read output from executed files, which feels amazing; this was a big step. THANK you!
    1. """Handles loading of plugins.""" import importlib.util import inspect import json import os import sys import zipfile from pathlib import Path from typing import List from urllib.parse import urlparse from zipimport import zipimporter import yaml import openapi_python_client import requests from auto_gpt_plugin_template import AutoGPTPluginTemplate from openapi_python_client.config import Config as OpenAPIConfig from autogpt.config.config import Config from autogpt.logs import logger from autogpt.models.base_open_ai_plugin import BaseOpenAIPlugin DEFAULT_PLUGINS_CONFIG_FILE = os.path.join( os.path.dirname(os.path.abspath(file)), "..", "..", "plugins_config.yaml" ) class PluginConfig: def init(self, plugin_dict): self.plugin_dict = plugin_dict def is_enabled(self, plugin_name): return self.plugin_dict.get(plugin_name, {}).get('enabled', False) with open(DEFAULT_PLUGINS_CONFIG_FILE, "r") as file: plugins_dict = yaml.safe_load(file) plugins_config = PluginConfig(plugins_dict) def scan_plugins(config: Config, debug: bool = False) -> List[AutoGPTPluginTemplate]: """Scan the plugins directory for plugins and loads them. Args: config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: List[Tuple[str, Path]]: List of plugins. """ loaded_plugins = [] plugins_path_path = Path(config.plugins_dir) # Directory-based plugins for plugin_path in [f.path for f in os.scandir(config.plugins_dir) if f.is_dir()]: if plugin_path.startswith("__"): # Avoid going into __pycache__ or other hidden directories continue plugin_module_path = plugin_path.split(os.path.sep) plugin_module_name = plugin_module_path[-1] qualified_module_name = ".".join(plugin_module_path) __import__(qualified_module_name) plugin = sys.modules[qualified_module_name] if not plugins_config.is_enabled(plugin_module_name): logger.warn(f"Plugin {plugin_module_name} found but not configured") continue for _, class_obj in inspect.getmembers(plugin): if hasattr(class_obj, "_abc_impl") and AutoGPTPluginTemplate in class_obj.__bases__: loaded_plugins.append(class_obj()) return loaded_plugins def inspect_zip_for_modules(zip_path: str, debug: bool = False) -> list[str]: """ Inspect a zipfile for a modules. Args: zip_path (str): Path to the zipfile. debug (bool, optional): Enable debug logging. Defaults to False. Returns: list[str]: The list of module names found or empty list if none were found. """ result = [] with zipfile.ZipFile(zip_path, "r") as zfile: for name in zfile.namelist(): if name.endswith("__init__.py") and not name.startswith("__MACOSX"): logger.debug(f"Found module '{name}' in the zipfile at: {name}") result.append(name) if len(result) == 0: logger.debug(f"Module '__init__.py' not found in the zipfile @ {zip_path}.") return result def write_dict_to_json_file(data: dict, file_path: str) -> None: """ Write a dictionary to a JSON file. Args: data (dict): Dictionary to write. file_path (str): Path to the file. """ with open(file_path, "w") as file: json.dump(data, file, indent=4) def fetch_openai_plugins_manifest_and_spec(config: Config) -> dict: """ Fetch the manifest for a list of OpenAI plugins. Args: urls (List): List of URLs to fetch. Returns: dict: per url dictionary of manifest and spec. 
""" # TODO add directory scan manifests = {} for url in config.plugins_openai: openai_plugin_client_dir = f"{config.plugins_dir}/openai/{urlparse(url).netloc}" create_directory_if_not_exists(openai_plugin_client_dir) if not os.path.exists(f"{openai_plugin_client_dir}/ai-plugin.json"): try: response = requests.get(f"{url}/.well-known/ai-plugin.json") if response.status_code == 200: manifest = response.json() if manifest["schema_version"] != "v1": logger.warn( f"Unsupported manifest version: {manifest['schem_version']} for {url}" ) continue if manifest["api"]["type"] != "openapi": logger.warn( f"Unsupported API type: {manifest['api']['type']} for {url}" ) continue write_dict_to_json_file( manifest, f"{openai_plugin_client_dir}/ai-plugin.json" ) else: logger.warn( f"Failed to fetch manifest for {url}: {response.status_code}" ) except requests.exceptions.RequestException as e: logger.warn(f"Error while requesting manifest from {url}: {e}") else: logger.info(f"Manifest for {url} already exists") manifest = json.load(open(f"{openai_plugin_client_dir}/ai-plugin.json")) if not os.path.exists(f"{openai_plugin_client_dir}/openapi.json"): openapi_spec = openapi_python_client._get_document( url=manifest["api"]["url"], path=None, timeout=5 ) write_dict_to_json_file( openapi_spec, f"{openai_plugin_client_dir}/openapi.json" ) else: logger.info(f"OpenAPI spec for {url} already exists") openapi_spec = json.load(open(f"{openai_plugin_client_dir}/openapi.json")) manifests[url] = {"manifest": manifest, "openapi_spec": openapi_spec} return manifests def create_directory_if_not_exists(directory_path: str) -> bool: """ Create a directory if it does not exist. Args: directory_path (str): Path to the directory. Returns: bool: True if the directory was created, else False. """ if not os.path.exists(directory_path): try: os.makedirs(directory_path) logger.debug(f"Created directory: {directory_path}") return True except OSError as e: logger.warn(f"Error creating directory {directory_path}: {e}") return False else: logger.info(f"Directory {directory_path} already exists") return True def initialize_openai_plugins( manifests_specs: dict, config: Config, debug: bool = False ) -> dict: """ Initialize OpenAI plugins. Args: manifests_specs (dict): per url dictionary of manifest and spec. config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: dict: per url dictionary of manifest, spec and client. 
""" openai_plugins_dir = f"{config.plugins_dir}/openai" if create_directory_if_not_exists(openai_plugins_dir): for url, manifest_spec in manifests_specs.items(): openai_plugin_client_dir = f"{openai_plugins_dir}/{urlparse(url).hostname}" _meta_option = (openapi_python_client.MetaType.SETUP,) _config = OpenAPIConfig( **{ "project_name_override": "client", "package_name_override": "client", } ) prev_cwd = Path.cwd() os.chdir(openai_plugin_client_dir) if not os.path.exists("client"): client_results = openapi_python_client.create_new_client( url=manifest_spec["manifest"]["api"]["url"], path=None, meta=_meta_option, config=_config, ) if client_results: logger.warn( f"Error creating OpenAPI client: {client_results[0].header} \n" f" details: {client_results[0].detail}" ) continue spec = importlib.util.spec_from_file_location( "client", "client/client/client.py" ) module = importlib.util.module_from_spec(spec) try: spec.loader.exec_module(module) finally: os.chdir(prev_cwd) client = module.Client(base_url=url) manifest_spec["client"] = client return manifests_specs def instantiate_openai_plugin_clients( manifests_specs_clients: dict, config: Config, debug: bool = False ) -> dict: """ Instantiates BaseOpenAIPlugin instances for each OpenAI plugin. Args: manifests_specs_clients (dict): per url dictionary of manifest, spec and client. config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: plugins (dict): per url dictionary of BaseOpenAIPlugin instances. """ plugins = {} for url, manifest_spec_client in manifests_specs_clients.items(): plugins[url] = BaseOpenAIPlugin(manifest_spec_client) return plugins
    1. UPDATE: Now it uses Pinecone. I had previously typed PINECONE in upper-case letters; that was the problem. Now the line in my .env file looks like this: MEMORY_BACKEND=pinecone. It crashes on the first run because it takes some time for the Pinecone index to initialize; that's normal. Still testing whether it actually codes something now... UPDATE 2: Now it wants to install a code editor like PyCharm :-) I've never seen it try something like this. I tried to give it human feedback: "you don't need a code editor, just use the write_to_file function." Sadly, that confused the AI.
    2. Are you referring to this: `self.redis_host = os.getenv("REDIS_HOST", "localhost")`? If so, should it look like this to use Pinecone: `self.redis_host = os.getenv("REDIS_HOST", "pinecone")`? Or is it this line: `self.memory_backend = os.getenv("MEMORY_BACKEND", 'local')`, which should become `self.memory_backend = os.getenv("MEMORY_BACKEND", 'pinecone')`? I'm no coder, so forgive my ignorance. :( I had already pasted the Pinecone API key, but it has yet to use it, and nothing is being properly written into the files upon completion of tasks. :(
    1. Add a new Claude-based workflow that runs when Dependabot opens a PR and has Claude review it. Base it on the claude.yml workflow and make sure to include the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub Action, and make it look up the changelog for every changed dependency, check them for breaking changes, and let us know if we're impacted.
    1. The agent blocks are missing their input/output pins because the input_schema and output_schema properties are not being populated in the GraphMeta objects when flows are loaded. When these are undefined, the CustomNode component falls back to empty schemas {}, resulting in no pins being rendered.
    1. I then realized, after looking into the Docker container while the project is running, that AutoGPT is in fact writing files to the directory /app/autogpt/workspace/auto_gpt_workspace. However, that directory is only accessible from a terminal inside the running container, and due to the nature of Docker containers, as soon as you exit the running AutoGPT you lose any documents it created. So it could be that running this project via Docker has a particular issue moving the files back out of the container after it completes a write. I'm totally new to AutoGPT; I just set it up yesterday, and I will try to investigate why this is happening.
  2. Sep 2025
    1. I also find that plugins_config (at line 178 of Auto-GPT-stable/autogpt/plugins/__init__.py) is always empty, no matter whether it is configured "correctly". It is not clear how plugins need to be defined, considering that every plugin developer uses a different name for their plugin (some with dashes, some without, etc.), and some do not yet implement the template plugin class; from what I read, that is not yet strictly necessary either. They talk about plugins working "as long as they are in the correct (NEW) format", but it is not clear what that format is.
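      A hedged debugging sketch for this: load plugins_config.yaml directly and print its keys, to check whether the configured key matches the plugin's directory/module name exactly (the path and plugin name below are illustrative, not taken from the thread):

      ```python
      import yaml

      # Adjust the path to wherever your plugins_config.yaml actually lives.
      with open("plugins_config.yaml") as f:
          cfg = yaml.safe_load(f) or {}

      print(list(cfg.keys()))  # do these match the plugin folder names exactly?
      print(cfg.get("AutoGPTPluginExample", {}).get("enabled", False))  # hypothetical plugin name
      ```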
    1. Whenever it has completed its auto-tasks (I usually run tasks in blocks of 50 to 200, i.e. y -50 up to y -200), you can type a message to it instead of typing y -xx or n (to exit). It will say it doesn't understand, but it typically "fixes" itself and sometimes accepts what you've written.
  3. Aug 2025
    1. Could you try changing tf.keras to keras and executing the code? I made some changes, such as using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without errors. Kindly find the gist of it here. Thank you!
    2. When attempting to add hub.KerasLayer to a tf.keras.Sequential model, TensorFlow raises a ValueError, stating that only instances of keras.Layer can be added. However, hub.KerasLayer is a subclass of keras.Layer, so this behavior seems unexpected. I expected hub.KerasLayer to be accepted as a valid layer in the tf.keras.Sequential model, as per the TensorFlow documentation.
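      A minimal sketch of the suggested workaround: build the model with the legacy Keras 2 package (tf_keras) instead of tf.keras (Keras 3). The TF-Hub handle is illustrative, and using TF_USE_LEGACY_KERAS to switch the environment over is an assumption:

      ```python
      import os
      os.environ["TF_USE_LEGACY_KERAS"] = "1"  # make tf.keras resolve to the legacy Keras 2 package

      import tensorflow as tf
      import tensorflow_hub as hub
      import tf_keras  # pip install tf_keras

      # hub.KerasLayer is accepted as a layer when the model is built with tf_keras.
      model = tf_keras.Sequential([
          hub.KerasLayer(
              "https://tfhub.dev/google/nnlm-en-dim50/2",  # illustrative text-embedding handle
              input_shape=[], dtype=tf.string, trainable=False,
          ),
          tf_keras.layers.Dense(1),
      ])
      model.summary()
      ```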
  4. Jun 2025
    1. The Tensorflow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings, with all the debugging information which could help us investigate.
  5. May 2025
    1. The areas that correspond to the difference between the lite versions are actually different in the h5 files as well. Furthermore, the dimensionality of some elements has changed from the start down to the red arrow and onwards. Can either of you try again with a model architecture that is identical prior to conversion?
    2. It is not clear how you are getting different results on quantization; perhaps you can explain more. Are you observing incorrect results with tf.compat.v1.lite.TFLiteConverter.from_keras_model_file and correct results with tf.lite.TFLiteConverter.from_keras_model?
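      A hedged sketch of how the two conversion paths could be compared on the same saved Keras model; "model.h5" is an illustrative path, and quantization settings are omitted here:

      ```python
      import tensorflow as tf

      # TF2 path: convert from an in-memory Keras model.
      keras_model = tf.keras.models.load_model("model.h5")
      tflite_v2 = tf.lite.TFLiteConverter.from_keras_model(keras_model).convert()

      # TF1 compat path: convert directly from the .h5 file.
      tflite_v1 = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5").convert()

      # Differing flatbuffer sizes are a quick hint that the two paths produced different graphs.
      print(len(tflite_v1), len(tflite_v2))
      ```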
    1. Reaching basic code quality prevents large merge conflicts and allows for testing so PRs don't break functionality. It also allows for a speedy PR review process. Until then, I'll be maintaining an active fork with best practices for Python development. Feel free to fork this code to make the idea work in your repo.
    2. The main client currently connects to RabbitMQ, but the connection is occasionally dropped. I'm having trouble getting the AI to use the commands so that I can debug them. The QA client has been rewritten using Rich to provide a very nice UI for question answering, but I haven't tested it yet. I'm a bit bogged down at work, but I still hope to work on it and be done this week, or at least this weekend.
    1. Unit tests were created to verify that the correct warning messages are raised when the n_alphas parameter is used, and that the correct warning message is raised when the default value of alphas is used. All unit tests pass, and warnings in the existing test cases are suppressed using warning filters.

      Unit test added
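      A hedged sketch of the kind of test described here; it assumes a scikit-learn version where this deprecation is in effect and that passing n_alphas emits a FutureWarning mentioning the parameter name (the warning class and message are assumptions, not taken from the PR):

      ```python
      import numpy as np
      import pytest
      from sklearn.linear_model import LassoCV

      def test_n_alphas_deprecation_warning():
          # Small synthetic regression problem.
          rng = np.random.RandomState(0)
          X = rng.rand(20, 3)
          y = X @ np.array([1.0, 2.0, 0.0])
          # Using the deprecated parameter should emit a deprecation warning.
          with pytest.warns(FutureWarning, match="n_alphas"):
              LassoCV(n_alphas=5).fit(X, y)
      ```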

    2. Updated LinearModelCV and the derived classes LassoCV, ElasticNetCV, MultiTaskElasticNetCV, and MultiTaskLassoCV to remove the n_alphas parameter from the constructor. The alphas parameter now accepts an integer or an array-like argument, so the functionality of n_alphas is preserved by passing an integer to alphas. The parameter constraints were updated accordingly.

      changed the derived class parameters
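      A hedged illustration of the described API, assuming a scikit-learn version that includes this change (alphas accepting either an integer grid size or an explicit array of values):

      ```python
      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      rng = np.random.RandomState(0)
      X = rng.rand(30, 4)
      y = X @ np.array([1.0, 0.0, -2.0, 0.5])

      # Integer: number of alphas to generate automatically (replaces n_alphas).
      ElasticNetCV(alphas=50).fit(X, y)

      # Array-like: an explicit alpha grid, as before.
      ElasticNetCV(alphas=np.logspace(-3, 0, 50)).fit(X, y)
      ```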

    1. A quick workaround is to apt-get remove the protobuf packages and build onnx from https://github.com/cyyever/onnx/tree/protobuf6 . This branch of onnx automatically downloads and compiles protobuf 6.30.1 as a dependency, which matches the Python protobuf package.

      Removing the dependency