Could you try updating MLflow to 2.20 or newer? The dictionary parameter type is supported since 2.20.
(You can check your installed version with `pip show mlflow`.)
I tried to apply the code you suggested, but it's not working well:
You can pass a different thread ID (via `params`) at runtime. The one passed to `log_model` is just an input "example" that MLflow uses to infer the input signature (types) for the model.
Discussion
You can use `params` to pass a configurable object, including the thread ID.
Maybe try changing this line in autogpt/processing/text.py? `def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]:` Honestly, I'm still checking to see if that'd be it, but doubtful lol
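For context, a minimal sketch of what a generator with that signature might do (this is an illustrative re-implementation of the chunking idea, not AutoGPT's actual code):

```python
from typing import Generator

def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]:
    """Yield chunks of `text`, each at most `max_length` characters long."""
    for i in range(0, len(text), max_length):
        yield text[i : i + max_length]

# Lowering max_length just means more, smaller chunks:
chunks = list(split_text("a" * 20, max_length=8))
print([len(c) for c in chunks])  # → [8, 8, 4]
```

So shrinking `max_length` shouldn't break anything by itself; it only changes how many chunks get produced.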
Hi @cheyennee, OOM errors are not a problem with TensorFlow, and with a low number of filters it does indeed work. Oh, sorry, I missed this in your logs.
It appears that the number of filters does make a difference 😂. In your gist, where the number of filters is set to 40, there are no crashes. However, when I reproduce the above code in Colab with the number of filters increased to 1792, as in the provided gist, it crashes, and the error message suggests a potential OOM issue.
I have tested the given code on Colab and it works fine; please refer to the attached gist. Please note that I reduced the number of filters due to memory constraints, but that should not affect the reported behaviour. Could you please verify the behaviour in the attached gist? Can you also confirm whether the issue is specific to the Windows package, as it will download the Intel package?
Make sure Docker is installed and that you have permission to run Docker commands: `docker run hello-world`
UPDATE: Now it uses Pinecone. I had previously typed PINECONE in upper-case letters; that was the problem. Now the line in my .env file looks like this: MEMORY_BACKEND=pinecone. It crashes on the first run because it takes some time for the Pinecone index to initialize; that's normal. Still testing whether it actually codes something now...

UPDATE_2: Now it wants to install a code editor like PyCharm :-) Never seen it try something like this. I've tried to give it human feedback: "you don't need a code editor, just use the write_to_file function." Sadly, that confused the AI...
Add a new Claude-based workflow that has Claude review a PR whenever Dependabot opens one. Base it on the claude.yml workflow and make sure to include the existing setup; just add a custom prompt. Research the best way to do this with the Claude GitHub Action, and make it look up the changelog for each of the changed dependencies, check them for breaking changes, and let us know if we're impacted.
Correct Approach
The agent blocks are missing their input/output pins because the `input_schema` and `output_schema` properties are not being populated in the `GraphMeta` objects when flows are loaded. When these are undefined, the `CustomNode` component falls back to empty schemas (`{}`), resulting in no pins being rendered.
When rendered in CustomNode.tsx (lines 132-137), agent blocks replace their schema with the hardcoded values:
The Fix Applied
I just deleted the three lines in autogpt/prompt.py. Maybe not the nicest solution, but it works so far.
Could we give `do_nothing` a lower variability, maybe?
Maybe you can try again with:
- EXECUTE_LOCAL_COMMANDS=false
- RESTRICT_TO_WORKSPACE=true

See if the file is written to the auto_gpt_workspace folder.
It's the plugins that need updating.
Try using something different from the local memory. I downloaded the code 5 days ago, so I don't know if it has been changed since, but inside the config.py file in the scripts folder, on line 75, the memory backend is hardcoded to local. Change local to pinecone and use a Pinecone API key if you want.
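A hedged sketch of the kind of change being suggested (the exact attribute name in config.py may differ; this just shows the pattern of replacing a hardcoded value with an environment-variable lookup):

```python
import os

# Instead of hardcoding the backend in config.py, e.g.
#     self.memory_backend = "local"
# read it from the environment, keeping "local" as the fallback:
memory_backend = os.getenv("MEMORY_BACKEND", "local")
print(memory_backend)  # "local" unless MEMORY_BACKEND is set, e.g. to "pinecone"
```

With this pattern, switching to Pinecone is just `MEMORY_BACKEND=pinecone` in your .env file, with no code edits needed.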
The gist notebook executed successfully; however, I am still getting the error on this machine:
Could you try modifying `tf.keras` to `keras` and executing the code? I also changed some steps, such as using `tf_keras`/`keras.Sequential` instead of `tf.keras.Sequential`, and the code executed without error/failure. Kindly find the gist of it here. Thank you!