The issue has been resolved. Installing tf-keras and using keras from it instead of tf.keras fixed the problem. Thank you!
- Oct 2025
-
github.com github.com
-
-
The solution where I installed tf_keras worked for that section, but I’m encountering a similar error in the "Attach a classification head" section of the same notebook. However, the previous solution does not seem to work in this case.
-
-
github.com github.com
-
fix list …
-
fix langchain callback injection
-
LGTM!
-
Do we have a source code reference that explains why the * operator works for a CallbackManager instance?
-
Can we use with mlflow.start_run?
-
-
github.com github.com
-
Could you try updating MLflow to 2.20 or newer? The dictionary parameter type is supported since 2.20.
check version
-
I tried to apply the code you suggested but it's not working well:
-
You can pass a different thread ID (params) at runtime. The one passed to log_model is just an input "example" for MLflow to determine the input signature (type) for the model.
Discussion
-
You can use params to pass the configurable object, including the thread ID.
suggested a new parameter
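For reference, a minimal sketch of logging a model with a params-aware signature and then overriding the thread ID at inference time; the graph.py file name and payload shapes are assumptions, not taken from this thread:

```python
import mlflow
from mlflow.models import infer_signature

# Example input and params: the thread_id here is only used to infer the
# signature; it is not baked into the model.
input_example = {"messages": [{"role": "user", "content": "hi"}]}
params_example = {"configurable": {"thread_id": "example-thread"}}
signature = infer_signature(input_example, "hi there", params=params_example)

with mlflow.start_run():
    info = mlflow.langchain.log_model(
        lc_model="graph.py",          # assumed model-from-code file
        artifact_path="graph",
        signature=signature,
        input_example=input_example,
    )

loaded = mlflow.pyfunc.load_model(info.model_uri)
# A different thread ID can be supplied per call via params
# (dictionary-valued params need MLflow >= 2.20).
loaded.predict(input_example, params={"configurable": {"thread_id": "runtime-thread"}})
```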
-
-
github.com github.com
-
Thanks so much, I have been struggling with this for a long time!
-
https://www.tensorflow.org/text/tutorials/text_generation document is failing in the Tensorflow v2.17 which contains Keras3.0. Modified the code to the Keras2.0.
-
-
github.com github.com
-
Maybe try changing this line in autogpt/processing/text.py: def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: Honestly, I'm still checking to see if that'd be it, but doubtful, lol.
-
I also just ran into this issue after cloning from master a few hours ago; message_agent went over the limit once, after which subsequent calls also failed. Telling the system to delete and re-create the agent got it past the bottleneck. Maybe some way to restrict the history provided to sub-agents would work?
-
Basically the max is 8192 tokens in this context; lowering it will force it to split content into smaller chunks, i.e. def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]: would split anything above that, I believe. It's linked into messages and other functions.
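For illustration only, a simplified sketch of what such a splitter does; this is not the actual AutoGPT implementation, which splits on sentence/token boundaries:

```python
from typing import Generator


def split_text(text: str, max_length: int = 4192) -> Generator[str, None, None]:
    # Yield consecutive slices of `text`, each at most `max_length` characters long.
    for start in range(0, len(text), max_length):
        yield text[start:start + max_length]
```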
-
Should be fixed in #2542 just now. Please pull master, check that your .env is up to date with .env.template, try again, and let us know if it's still broken for you.
-
My issue is usually generated by the browse function, so I'm changing this:
self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
to:
self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 2192))
self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
-
I'm running into the same; we need to limit certain chunks, I think. We should be able to change the chunk size to fix it. Not sure if that'll fix the total token amount, but we can make the summaries smaller too.
-
-
github.com github.com
-
Hi @cheyennee, OOM errors are not a problem with TensorFlow, and with a low number of filters it is indeed working. Oh sorry, I missed this in your logs.
-
The same problem can be found in tf.keras.layers.ZeroPadding3D. Here is the repro code:

import tensorflow as tf

padding = 16
data_format = None
__input___0_tensor = tf.random.uniform([1, 1, 2, 2, 3], minval=0.8510533546655319, maxval=3.0, dtype=tf.float64)
__input___0 = tf.identity(__input___0_tensor)
ZeroPadding3D_class = tf.keras.layers.ZeroPadding3D(padding=padding, data_format=data_format)
layer = ZeroPadding3D_class
inputs = __input___0
with tf.GradientTape() as g:
    g.watch(inputs)
    res = layer(inputs)
print(res.shape)
grad = g.jacobian(res, inputs)
print(grad)
-
It appears that the number of filters does make a difference 😂. In your gist, where the number of filters is set to 40, there are no crashes. However, when I ran the above code in Colab with the number of filters increased to 1792 (as in the provided gist), it crashes, and the error message suggests a potential OOM issue.
-
I have tested the given code on Colab and it's working fine. Please refer to the attached gist. Please note that I have reduced the number of filters due to memory constraints, but it should not affect the reported behaviour. Could you please verify the attached behaviour? Can you confirm whether the issue is with the Windows package, as it will download the Intel package?
-
-
github.com github.com
-
Fix NVCC+Clang build failure. …
-
"nvcc --compiler-bindir /path/to/clang" sets __clang__ while compiling CUDA code. This causes gpu_device_functions.h to think it is being compiled with Clang and try to use a Clang-specific function.
-
approved
-
-
github.com github.com
-
Make sure Docker is installed and you have permission to run Docker commands: docker run hello-world
-
I am running into the same issue.
-
Thanks for the tip. I had to enable Virtual Machine support in the BIOS to run Docker now. (...)! I believe it worked! One strange thing though, as you can see: it first states it can't find the file, then proceeds to read the output of the file anyway (meaning it found the file): Executing file 'generate_dinner_recipe.py' in workspace 'auto_gpt_workspace' [2023-04-07T03:22:43.847792900Z][docker-credential-desktop.EXE][W] Windows version might not be up-to-date: The system cannot find the file specified. SYSTEM: Command execute_python_file returned: (...) BUT, I can now read from executing files, which feels amazing, like this was a big step, and THANK you.
-
-
github.com github.com
-
"""Handles loading of plugins.""" import importlib.util import inspect import json import os import sys import zipfile from pathlib import Path from typing import List from urllib.parse import urlparse from zipimport import zipimporter import yaml import openapi_python_client import requests from auto_gpt_plugin_template import AutoGPTPluginTemplate from openapi_python_client.config import Config as OpenAPIConfig from autogpt.config.config import Config from autogpt.logs import logger from autogpt.models.base_open_ai_plugin import BaseOpenAIPlugin DEFAULT_PLUGINS_CONFIG_FILE = os.path.join( os.path.dirname(os.path.abspath(file)), "..", "..", "plugins_config.yaml" ) class PluginConfig: def init(self, plugin_dict): self.plugin_dict = plugin_dict def is_enabled(self, plugin_name): return self.plugin_dict.get(plugin_name, {}).get('enabled', False) with open(DEFAULT_PLUGINS_CONFIG_FILE, "r") as file: plugins_dict = yaml.safe_load(file) plugins_config = PluginConfig(plugins_dict) def scan_plugins(config: Config, debug: bool = False) -> List[AutoGPTPluginTemplate]: """Scan the plugins directory for plugins and loads them. Args: config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: List[Tuple[str, Path]]: List of plugins. """ loaded_plugins = [] plugins_path_path = Path(config.plugins_dir) # Directory-based plugins for plugin_path in [f.path for f in os.scandir(config.plugins_dir) if f.is_dir()]: if plugin_path.startswith("__"): # Avoid going into __pycache__ or other hidden directories continue plugin_module_path = plugin_path.split(os.path.sep) plugin_module_name = plugin_module_path[-1] qualified_module_name = ".".join(plugin_module_path) __import__(qualified_module_name) plugin = sys.modules[qualified_module_name] if not plugins_config.is_enabled(plugin_module_name): logger.warn(f"Plugin {plugin_module_name} found but not configured") continue for _, class_obj in inspect.getmembers(plugin): if hasattr(class_obj, "_abc_impl") and AutoGPTPluginTemplate in class_obj.__bases__: loaded_plugins.append(class_obj()) return loaded_plugins def inspect_zip_for_modules(zip_path: str, debug: bool = False) -> list[str]: """ Inspect a zipfile for a modules. Args: zip_path (str): Path to the zipfile. debug (bool, optional): Enable debug logging. Defaults to False. Returns: list[str]: The list of module names found or empty list if none were found. """ result = [] with zipfile.ZipFile(zip_path, "r") as zfile: for name in zfile.namelist(): if name.endswith("__init__.py") and not name.startswith("__MACOSX"): logger.debug(f"Found module '{name}' in the zipfile at: {name}") result.append(name) if len(result) == 0: logger.debug(f"Module '__init__.py' not found in the zipfile @ {zip_path}.") return result def write_dict_to_json_file(data: dict, file_path: str) -> None: """ Write a dictionary to a JSON file. Args: data (dict): Dictionary to write. file_path (str): Path to the file. """ with open(file_path, "w") as file: json.dump(data, file, indent=4) def fetch_openai_plugins_manifest_and_spec(config: Config) -> dict: """ Fetch the manifest for a list of OpenAI plugins. Args: urls (List): List of URLs to fetch. Returns: dict: per url dictionary of manifest and spec. 
""" # TODO add directory scan manifests = {} for url in config.plugins_openai: openai_plugin_client_dir = f"{config.plugins_dir}/openai/{urlparse(url).netloc}" create_directory_if_not_exists(openai_plugin_client_dir) if not os.path.exists(f"{openai_plugin_client_dir}/ai-plugin.json"): try: response = requests.get(f"{url}/.well-known/ai-plugin.json") if response.status_code == 200: manifest = response.json() if manifest["schema_version"] != "v1": logger.warn( f"Unsupported manifest version: {manifest['schem_version']} for {url}" ) continue if manifest["api"]["type"] != "openapi": logger.warn( f"Unsupported API type: {manifest['api']['type']} for {url}" ) continue write_dict_to_json_file( manifest, f"{openai_plugin_client_dir}/ai-plugin.json" ) else: logger.warn( f"Failed to fetch manifest for {url}: {response.status_code}" ) except requests.exceptions.RequestException as e: logger.warn(f"Error while requesting manifest from {url}: {e}") else: logger.info(f"Manifest for {url} already exists") manifest = json.load(open(f"{openai_plugin_client_dir}/ai-plugin.json")) if not os.path.exists(f"{openai_plugin_client_dir}/openapi.json"): openapi_spec = openapi_python_client._get_document( url=manifest["api"]["url"], path=None, timeout=5 ) write_dict_to_json_file( openapi_spec, f"{openai_plugin_client_dir}/openapi.json" ) else: logger.info(f"OpenAPI spec for {url} already exists") openapi_spec = json.load(open(f"{openai_plugin_client_dir}/openapi.json")) manifests[url] = {"manifest": manifest, "openapi_spec": openapi_spec} return manifests def create_directory_if_not_exists(directory_path: str) -> bool: """ Create a directory if it does not exist. Args: directory_path (str): Path to the directory. Returns: bool: True if the directory was created, else False. """ if not os.path.exists(directory_path): try: os.makedirs(directory_path) logger.debug(f"Created directory: {directory_path}") return True except OSError as e: logger.warn(f"Error creating directory {directory_path}: {e}") return False else: logger.info(f"Directory {directory_path} already exists") return True def initialize_openai_plugins( manifests_specs: dict, config: Config, debug: bool = False ) -> dict: """ Initialize OpenAI plugins. Args: manifests_specs (dict): per url dictionary of manifest and spec. config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: dict: per url dictionary of manifest, spec and client. 
""" openai_plugins_dir = f"{config.plugins_dir}/openai" if create_directory_if_not_exists(openai_plugins_dir): for url, manifest_spec in manifests_specs.items(): openai_plugin_client_dir = f"{openai_plugins_dir}/{urlparse(url).hostname}" _meta_option = (openapi_python_client.MetaType.SETUP,) _config = OpenAPIConfig( **{ "project_name_override": "client", "package_name_override": "client", } ) prev_cwd = Path.cwd() os.chdir(openai_plugin_client_dir) if not os.path.exists("client"): client_results = openapi_python_client.create_new_client( url=manifest_spec["manifest"]["api"]["url"], path=None, meta=_meta_option, config=_config, ) if client_results: logger.warn( f"Error creating OpenAPI client: {client_results[0].header} \n" f" details: {client_results[0].detail}" ) continue spec = importlib.util.spec_from_file_location( "client", "client/client/client.py" ) module = importlib.util.module_from_spec(spec) try: spec.loader.exec_module(module) finally: os.chdir(prev_cwd) client = module.Client(base_url=url) manifest_spec["client"] = client return manifests_specs def instantiate_openai_plugin_clients( manifests_specs_clients: dict, config: Config, debug: bool = False ) -> dict: """ Instantiates BaseOpenAIPlugin instances for each OpenAI plugin. Args: manifests_specs_clients (dict): per url dictionary of manifest, spec and client. config (Config): Config instance including plugins config debug (bool, optional): Enable debug logging. Defaults to False. Returns: plugins (dict): per url dictionary of BaseOpenAIPlugin instances. """ plugins = {} for url, manifest_spec_client in manifests_specs_clients.items(): plugins[url] = BaseOpenAIPlugin(manifest_spec_client) return plugins
-
I had the same problem.
-
-
github.com github.com
-
UPDATE: Now it uses Pinecone. I had previously typed PINECONE in upper-case letters; that was the problem. Now the line in my .env file looks like this: MEMORY_BACKEND=pinecone. It crashes on the first run because it takes some time for the Pinecone index to initialize; that's normal. Still testing if it actually codes something now... UPDATE_2: Now it wants to install a code editor like PyCharm :-) Never seen it try something like this. I've tried to give it human feedback: you don't need a code editor, just use the write_to_file function. Sadly that confused the AI.
-
Are you referring to this?
self.redis_host = os.getenv("REDIS_HOST", "localhost")
If so, should it look like this to use pinecone?
self.redis_host = os.getenv("REDIS_HOST", "pinecone")
Or is it this:
self.memory_backend = os.getenv("MEMORY_BACKEND", 'local')
and should it be this:
self.memory_backend = os.getenv("MEMORY_BACKEND", 'pinecone')
I'm no coder, so forgive my ignorance. :( I had already pasted the API key for Pinecone, but it has yet to use it, and nothing is being properly written into the files upon completion of tasks. :(
-
Same problem here. I put my Pinecone key and region in the .env file, but AutoGPT only uses local memory and never writes anything to a file. It also seems to forget what it has already researched after a few steps.
-
-
github.com github.com
-
Add a new Claude-based workflow for when Dependabot opens a PR, to have Claude review it. Base it on the claude.yml workflow and make sure to include the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub action, and make it look up the changelog for all the dependencies changed by Dependabot, check them for breaking changes, and let us know if we're impacted.
-
Additional Observations:
-
Correct Approach
-
Recommended Fix:
-
Findings:
-
-
github.com github.com
-
The agent blocks are missing their input/output pins because the input_schema and output_schema properties are not being populated in the GraphMeta objects when flows are loaded. When these are undefined, the CustomNode component falls back to empty schemas {}, resulting in no pins being rendered.
-
When rendered in CustomNode.tsx (lines 132-137), agent blocks replace their schema with the hardcoded values:
-
The Likely Cause:
-
Root Cause Identified
-
-
github.com github.com
-
The Fix Applied
-
Root Cause Analysis
-
Successfully fixed the TypeError that occurred when the DataForSEO API returned an unexpected response structure in which items could be None.
-
Added a null check in autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py to ensure items is never None before iterating. Verified that existing tests still pass after the fix.
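A minimal sketch of the kind of null check described; the function and field names are illustrative, not the exact code from the PR:

```python
def extract_related_keywords(result: dict) -> list[str]:
    # `items` can come back as None when the API response has an unexpected
    # structure, so coalesce to an empty list before iterating.
    items = result.get("items") or []
    return [item.get("keyword", "") for item in items if isinstance(item, dict)]
```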
-
-
github.com github.com
-
Here is how you can check if there is some bottleneck on your machine: Instead of running ./run.bat (or ./run.sh) you can run: python -m cProfile -o profile.pickle -m autogpt
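Once the profile has been written, it can be inspected with the standard-library pstats module, for example:

```python
import pstats

stats = pstats.Stats("profile.pickle")
# Show the 20 functions with the highest cumulative time.
stats.sort_stats("cumulative").print_stats(20)
```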
-
-
github.com github.com
-
This is a prompting issue and a limitation of LLMs
-
I just deleted the three lines in autogpt/prompt.py. Maybe not the nicest solution, but it works so far.
-
Could we give do_nothing a lower variability, maybe?
-
-
github.com github.com
-
I then realized, after looking into the Docker container while the project is running, that AutoGPT is in fact writing files to the directory /app/autogpt/workspace/auto_gpt_workspace, though it's only accessible via a terminal inside the running container. Due to the nature of Docker containers, as soon as you exit the running AutoGPT, you lose any documents it creates. So it could be that running this project via Docker has a particular issue moving the files back out whenever it completes a write to a file. I'm totally new to AutoGPT; I just set it up yesterday, and I will try to investigate why this issue is happening.
-
Results are not written to a file (disappointing ongoing issue) #3583
-
-
github.com github.com
-
After changing the docker-compose volume, it worked
-
I'm also running into this problem. I've confirmed that writing to the workspace within Docker yields the expected result, so I know it isn't a problem with my use of Docker
-
Looks like you're using Docker? If so, this worked for me: create auto-gpt.json in the project root and mount auto-gpt.json in the docker run command (e.g. via a -v bind mount).
-
Maybe you can try again with EXECUTE_LOCAL_COMMANDS=false and RESTRICT_TO_WORKSPACE=true, and see if the file is written to the auto_gpt_workspace folder.
-
- Sep 2025
-
github.com github.com
-
Hehe, just remove the software; no problem then. That doesn't quite fix anything, though. If you're looking for a fix, you could check out my previous comment.
-
It's the plugins that need updating.
-
I also think it's that. I find that plugins_config (at line 178 of Auto-GPT-stable/autogpt/plugins/__init__.py) is always empty, no matter whether it is configured "correctly", and it is not clear how plugins need to be properly defined, considering every plugin developer uses a different name for their plugins (some with dashes, others without, etc.), and some do not yet implement the template plugin class; from what I read, that is not yet strictly necessary anyway. They talk about plugins working "as long as they are in the correct (NEW) format", but it is not clear what that is either.
-
-
github.com github.com
-
This is only possible after each chunk of continuously executed steps. I almost never let it run continuously since it often goes wild or runs the same thing in loops without getting anything done.
-
Whenever it has completed its auto-tasks (I usually do tasks in blocks of 50 (y -50) to 200 (y -200)), you can type in a message to it instead of typing y -xx or n (to exit). It will say it doesn't understand, but it typically "fixes" itself and sometimes will accept what you've written.
-
But I tried Pinecone, and it seems useless… I will try again.
-
Try using something different from the local memory. I downloaded the code 5 days ago, so I don't know if it has been changed, but inside the config.py file in the scripts folder, on line 75, the memory backend is hardcoded to local. Change local to pinecone and use a Pinecone API key if you want.
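A rough sketch of the change being described (the attribute and environment variable names follow the comment above and should be treated as illustrative):

```python
import os


class Config:
    def __init__(self):
        # Before: self.memory_backend = "local"  (hardcoded)
        # After: read from the environment, falling back to pinecone as suggested.
        self.memory_backend = os.getenv("MEMORY_BACKEND", "pinecone")
        self.pinecone_api_key = os.getenv("PINECONE_API_KEY")
```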
-
Unable to write the file locally. It always reports a JSON error. I have been trying for more than ten hours since yesterday. I wasted a lot of OpenAI tokens and a lot of my time. Please, can you solve this first?
-
-
github.com github.com
-
Describe the problem
-
-
github.com github.com
-
Both libraries use signals as an identifier, which leads to a namespace collision.
-
#undef signals
-
Standalone code to reproduce the issue
-
Using MSVC 2022, but the same error occurs when using the LLVM/MSVC 2019 compiler with my C++ application (in Qt 6.5.0).
-
-
github.com github.com
-
add test … 439cbd4
test case added
-
-
github.com github.com
-
OOM when allocating tensor with shape[50982027264] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node gradient_tape/UnsortedSegmentSum/pfor/UnsortedSegmentSum}}]]
-
It may be that the tensor is simply too large and memory is overflowing: a float tensor with shape [50982027264] needs roughly 4 bytes × 50,982,027,264 ≈ 190 GiB, far more than any single GPU provides.
-
- Aug 2025
-
github.com github.com
-
The gist notebook executed successfully; however, I am still getting the error on this machine:
-
Could you try to modify tf.keras to keras and execute the code? I have changed some steps, like using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without error. Kindly find the gist of it here. Thank you!
-
Also, I have changed some steps, like using tf_keras/keras.Sequential instead of tf.keras.Sequential, and the code executed without error. Kindly find the gist of it here.
-
Hi, by default the Colab notebook is using TensorFlow v2.17, which contains Keras 3.0, and that was causing the error. Could you please try to import Keras 2.0 with the commands below?
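The original commands are not reproduced in this digest, but the commonly suggested way to fall back to Keras 2 on TF 2.16+/2.17 looks roughly like this (assuming the tf-keras package is installed):

```python
# pip install tf-keras
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must be set before importing tensorflow

import tensorflow as tf
import tf_keras as keras  # Keras 2 API

# tf_keras layers can then be used where tf.keras was used before.
model = keras.Sequential([keras.layers.Dense(8)])
```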
-
When attempting to add hub.KerasLayer to a tf.keras.Sequential model, TensorFlow raises a ValueError, stating that only instances of keras.Layer can be added. However, hub.KerasLayer is a subclass of keras.Layer, so this behavior seems unexpected. I expected hub.KerasLayer to be accepted as a valid layer in the tf.keras.Sequential model, as per the TensorFlow documentation.
-
- Jun 2025
-
github.com github.com
-
sklearn 1.6 installed with conda (above)
sklearn 1.3 installed with conda
sklearn 1.6 installed with pip
Cross Version Validation
-
-
github.com github.com
-
return
Fixed some miscellaneous issues after reviewing
-
-
-
ONNX 1.17.1 prints:
Cross validation
-
-
github.com github.com
-
This PR fixes a C compatibility issue in the TfLiteQuantizationType enum definition. The current definition uses C++ syntax (enum : int) which causes compilation errors when included in C projects. This PR adds conditional compilation directives to use C++ syntax only when compiling with C++.
-
-
github.com github.com
-
But I think it was a lot of gdb backtracing that led me to the file.
-
Whenever the FreezeSavedModel function is called in the code, tensorflow::ClientSession cannot execute properly. Things work absolutely fine if the FreezeSavedModel call is commented out.
-
-
github.com github.com
-
TF version: 1.13.1; bazel: 0.19.2. Platform:
-
It seems that it is getting confused by the double quotes.
-
Facing the same issue here. My OS is Ubuntu 18.04.
-
-
github.com github.com
-
TOCO applies a set of optimizations to reduce the model size, improve inference speed, and ensure compatibility with the target platform.
-
we are converting them to reshapes so that we can use standard reshape optimization transforms"
-
The behavior you mentioned, where tf.squeeze() is converted to a reshape operator when using the TOCO (TensorFlow Lite Optimizing Converter), is expected.
-
-
github.com github.com
-
Please use pip<24.1 if you need to use this version.
-
absl-py (<0.11>=0.10.0)
-
Ignoring version 0.3.1.dev202105110329 of tflite-model-maker-nightly since it has invalid metadata:
-
I'm able to replicate the same behavior from my end
-
-
github.com github.com
-
Are you satisfied with the resolution of your issue?
-
Standalone code to reproduce the issue
-
-
github.com github.com
-
An M1 Mac running Big Sur, Homebrew Python 3.8.12, pip version 21.3, using the tensorflow_macos virtual environment.
-
Taking @alfaro96's comment as a hint, I tried another pip install but with Python 3.8, and it worked. Hope this helps other people.
-
No module named pip
-
pip install --pre --extra-index https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn — this works too if you are on 3.9.
-
We have not released a version supporting Python 3.9 on PyPI yet.
-
-
github.com github.com
-
https://reviews.llvm.org/D81045 should help here.
-
That's right - that would be the minimal form - the copy has nothing to do with this issue. Not a high priority issue, but would be good to fail verification in such cases.
validated
-
So the issue is with the custom terminator which isn't strict about the possible parent operations? Then the minimal example is something like:
-
To reproduce: $ bazel-bin/tensorflow/compiler/mlir/tf-opt verify.mlir.
-
-
github.com github.com
-
But indeed, we can also stop building/testing our Python 3.13 wheels with numpy nightly, so updating that in #59819.
config updated
-
Note that numpy 2.1.1 has Python 3.13 wheels and still has np._get_promotion_state so one option would be to switch to released numpy.
-
BUG: Remove np._get_promotion_state usage #59818
-
lesteve mentioned this on Sep 16, 2024: ⚠️ CI failed on Wheel builder (last failure: Sep 16, 2024) ⚠️ scikit-learn/scikit-learn#29852
-
AttributeError: module 'numpy' has no attribute '_get_promotion_state'
-
-
github.com github.com
-
This guide details the migration from Estimator to Keras APIs: https://www.tensorflow.org/guide/migrate/migrating_estimator
Migration guidance and deprecation info shared
-
Faced the same issue with tensorflow 2.3.0 and tensorflow-hub 0.10.0. I just solved it by uninstalling tensorflow-estimator, tensorflow-hub and tensorflow, then installing tensorflow and tensorflow-hub again. Now it's working :-)
-
Reinstall tensorflow and tensorflow-hub and it worked :-)
-
So they may change between versions. My guess is that for me it was an installation issue
-
tf.estimator.Estimator(model_fn)
-
-
github.com github.com
-
This was released as part of modelstore==0.0.75
-
I can confirm that it works with the latest main.
-
I try exists(); if that fails with a ValueError, I try list_blobs().
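A rough sketch of that fallback, assuming the google-cloud-storage client; the helper name is made up for illustration:

```python
from google.cloud import storage


def bucket_is_reachable(client: storage.Client, bucket_name: str) -> bool:
    bucket = client.bucket(bucket_name)
    try:
        return bucket.exists()
    except ValueError:
        # Fall back to listing a single blob; if listing works, treat the
        # bucket as usable.
        list(client.list_blobs(bucket_name, max_results=1))
        return True
```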
-
It is triggered when bucket.exists() is called.
-
I've managed to reproduce this error without modelstore.
-
-
github.com github.com
-
TF2.14 (for issues related to TensorFlow 2.14.x), comp:keras
-
-
github.com github.com
-
The Tensorflow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings, with all the debugging information which could help us investigate.
-
we require input indices to be unique. Otherwise, not only is the output non-deterministic on GPU, but gradients are broken on any device.
-
I was able to reproduce the issue on tensorflow v2.8, v2.9 and nightly. Kindly find the gist of it here.
-
-
github.com github.com
-
edited by ekdnam
Edited the post to revise
-
So, some weights in the new model have a different shape compared to the old model.
-
Could you please submit a minimal code snippet for reproduction of the issue? Also, the dataset modified_train.txt is missing.
-
- May 2025
-
github.com github.com
-
keras_core will subsequently be released as Keras 3, and the tf.keras module will become legacy code. Thanks!
dead code removed
-
I have checked the code with keras_core, which is now a multi-backend library. With keras_core, the reported behaviour does not occur. Please refer to the attached gist.
-
loop in reconstruct_from_config()
-
I was able to replicate this issue on colab, please find the gist here. Thank you!
-
-
github.com github.com
-
The areas which correspond to the differences in the lite versions are actually different in the h5 files. Furthermore, the dimensionality of some elements has changed from the start to the red arrow and downwards. Can either of you try again when the model architecture is identical prior to conversion?
-
I found that the branch of the "Quantize" node for concat OP quantization is different (as shown in the figure below): left is OK, and right is bad.
-
It is not clear how you are getting different results on quantization; perhaps you can explain more. Are you observing incorrect results with tf.compat.v1.lite.TFLiteConverter.from_keras_model_file and correct results with tf.lite.TFLiteConverter.from_keras_model?
-
-
github.com github.com
-
copybara-service bot merged commit 749de42 into master Feb 12, 2025 2 checks passed
-
deleted the exported_pr_715840018 branch
-
force-pushed the exported_pr_715840018 branch 19 times, most recently from 7ed12b8 to 4c7bc36 Compare February 11, 2025 03:16
-
-
github.com github.com
-
It worked. Thank you very much.
-
tf.app.run is deprecated in TF 2.x; please use tf.compat.v1.app.run for TF 2.12.
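A minimal sketch of the replacement; the main function body is just a placeholder:

```python
import tensorflow as tf


def main(argv):
    del argv  # unused
    print("running")


if __name__ == "__main__":
    # tf.app.run(main) from TF 1.x becomes:
    tf.compat.v1.app.run(main)
```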
-
-
github.com github.com
-
B-Step62 merged commit 4eba574 into mlflow:master Feb 3, 2025 44 of 47 checks passed
-
Hence, this PR removes the system prompt from input example.
-
As a result, when users build a ChatModel / ChatAgent with those providers, logging the model inside with mlflow.start_run(): fails with a confusing error.
-
-
github.com github.com
-
1 check passed
-
deleted
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite pa… …
-
Update DetermineArgumentLayoutsFromCompileOptions to not overwrite parameter & result memory spaces.
-
-
github.com github.com
-
deleted
deleted a branch
-
Mark tracing APIs as experimental …
-
Mark all user-facing tracing APIs as experimental.
-
-
github.com github.com
-
Reaching basic code quality prevents large merge conflicts and allows for testing so PRs don't break functionality. It also allows for a speedy PR review process. Until then, I'll be maintaining an active fork with best practices for Python development. Feel free to fork this code to make the idea work in your repo.
-
CI is red
-
I’ll start actively maintaining it after some (any) pr making scripts a module is merged. That’s a large part of the diff, and continuing to contribute to the repo is just not worth the effort without that change.
-
Now the system doesn't need to request the user's response.
-
The main client currently connects to the RabbitMQ database, but occasionally the connection is dropped. I'm having trouble getting the AI to use the commands so I can debug them. The QA client has been rewritten using Rich to provide a very nice UI for question answering, but I haven't tested it yet. I'm a bit bogged down at work. Still hope to work on it and be done this week, at least by this weekend.
-
Renamed scripts to autogpt. Absolute Imports. isort. Fixed flake8 F40… …
added boilerplate code
-
-
-
I thought I could use it for free. If that works on your side, that's good.
-
Hi @Gumichocopengin8, did you buy any credits for your OpenAI API? The above error happens when you don't have any credits on your API account.
Identified the issue
-
I did that too, as well as OPENAI_API_KEY=xxx python code.py, but I don't think it matters. Both didn't work.
Implemented the proposed fix
-
Thanks for the report, and thanks @joelrobin18 for the PR. @Gumichocopengin8, does changing openai.ChatCompletion to openai.chat.completions fix the code sample for you?
check other version of openAI
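A minimal sketch of that change using the openai>=1.0 client API; the model name is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# openai<1.0: openai.ChatCompletion.create(...)
# openai>=1.0:
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```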
-
-
github.com github.com
-
Thanks for your help. I could register the graph with MLflow 2.20. Please add this guidance to the documentation as well.
Version update
-
-
github.com github.com
-
Copy-and-paste code to reproduce this:
Reproduced the bug
-
Okay, I found out that I need to specify the parameter for subsample to be something less than 1 to get the score.
Parameter not specified
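A minimal sketch, assuming the context is a scikit-learn gradient boosting model whose out-of-bag estimates are only computed when subsample < 1.0:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, random_state=0)

# With the default subsample=1.0 no data is held out, so no OOB estimates exist.
model = GradientBoostingRegressor(subsample=0.8, random_state=0).fit(X, y)
print(model.oob_improvement_[:5])  # populated only because subsample < 1.0
```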
-
-
github.com github.com
-
adrinjalali approved these changes
approved
-
I left a few comments, please fix the rest of the PR accordingly.
reviewed the code
-
Unit tests created to verify correct warning messages are raised upon usage of n_alphas parameter. Unit tests created to verify correct warning message if using the default value of alphas. All unit tests passing and all warnings suppressed on existing test cases using filter warnings.
Unit test added
-
Updated LinearModelCV and derived classes LassoCV, ElasticNetCV, MultiTaskElasticNetCV, MultiTaskLassoCV to remove n_alphas parameter from the constructor. alphas parameter is updated to support an integer or array-like argument. Functionality of n_alphas is preserved by passing an integer to alphas. Parameter_constraints updated accordingly.
changed the derived class parameters
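A hypothetical usage sketch of the change described above, assuming a scikit-learn release that includes it:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=100, n_features=20, random_state=0)

# New style: an integer alphas replaces the deprecated n_alphas parameter.
model_int = LassoCV(alphas=100).fit(X, y)

# Still supported: an explicit array-like grid of alphas.
model_grid = LassoCV(alphas=np.logspace(-4, 0, 100)).fit(X, y)
```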
-
-
-
requested a review from a team as a code owner
asked for code review
-
Codecov Report
verified the test case result
-
-
github.com github.com
-
Onnx Build Logs Iteration-1.txt
cyyever commented on Mar 21, 2025 (edited): @vijayaramaraju-kalidindi The situation is tricky in your case because the protobuf libraries are all static libraries. I need to know your Linux distribution.
Reviewing the solution
-
A quick workaround is to apt-get remove the protobuf packages and build onnx from https://github.com/cyyever/onnx/tree/protobuf6. This branch of onnx will download and compile protobuf 6.30.1 as a dependency automatically, which matches Python's protobuf.
Removing the dependency
-
We must detect such cases and link to protobuf::libprotobuf, which should be a shared library.
Proposed a new branch
-
Thanks, this worked. Successfully built ONNX.
Verified the Fix
-
The PR has been merged; you could try the main branch.
Merged the PR after Successful retest
-
This error originates from a subprocess, and is likely not a problem with pip.
Error Root originated
-
-
github.com github.com
-
copybara-service mentioned this on Apr 15, 2025: PR #91416: Fix C compatibility issue in TfLiteQuantizationType enum (google-ai-edge/LiteRT#1736)
Fixed C compatibility issue
-
Fix C compatibility issue in TfLiteQuantizationType enum #91416
Implemented the fix
-
No problems using TensorFlow Lite 2.18. Commit 977257e caused the issue.
Root Cause Analysis
-
I am closing this issue because PR #91416 got merged. If you face any issue, please feel free to post your comments, and if required I'll reopen this issue.
Merging and Closing the Issue
-
-
github.com github.com
-
Fixes API Deprecate n_alphas in LinearModelCV #30616
Fixed the issue
-
adrinjalali closed this as completed in #30616
Approved & Merged
-
-
-
justinchuby closed this as completed
Closed the issue
-
Could you test with the latest onnx-weekly package (you can install it from pip)? It may have been fixed
Experimental Build Tested
-
The contents of op_run.to_array_extended(t), and in 1.17.1 those of onnx.numpy_helper.to_array(t) as well, may vary because it is actually an uninitialized array.
Root Cause Identified
-
-
github.com github.com
-
Add links to examples from the docstrings and user guides #26927
Improve the docs
-