Recommended models
The local models Ollama recommends for use with Claude Code:
- qwen3-coder
- glm-4.7
- gpt-oss:20b
- gpt-oss:120b
Ollama can also be used with Claude Code through its endpoint, which lets you use open Chinese coding models like Qwen3.
You can likewise run Claude Code against local models through the LM Studio endpoint, so you don't have to use Claude in the cloud.
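A minimal sketch of wiring Claude Code to a local endpoint. The environment variable names, port, and token value here are assumptions based on common local-endpoint setups, not confirmed from the Ollama or Claude Code docs; verify them against the current documentation before use.

```shell
# Sketch: point Claude Code at a locally running Ollama server.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN names and the default
# Ollama port (11434) are assumptions — check current docs.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="ollama"   # placeholder; local servers often ignore it

ollama pull qwen3-coder                # fetch one of the recommended models
claude --model qwen3-coder             # launch Claude Code against the local model
```

For LM Studio the same idea applies, with the base URL pointed at LM Studio's local server port instead.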
https://web.archive.org/web/20250413154306/https://tailscale.com/ Tailscale gives your devices remote access to your local network, as a software-defined network.
https://web.archive.org/web/20250413154154/https://blog.6nok.org/tailscale-is-pretty-useful/
Tailscale to access local network devices (found via Alper)
Do you know about lacolhost.com? Use something like blerg.lacolhost.com:3000/ as your URL and it'll resolve to localhost:3000, which is where your tests are running.
I've developed additional perspective on this issue. I have DNS settings in my hosts file that resolve the visits to localhost while preserving the subdomain in the request (the latter point is important because Rails path helpers care which subdomain is being requested). To sum up the scope of the problem as it stands now: within Heroku/Capybara system tests I need to both route requests to localhost and maintain the subdomain information of the request. I've been able to accomplish one or the other, but haven't found a configuration that provides both yet.
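The hosts-file half of this can be sketched as below. The hostnames are hypothetical examples; /etc/hosts does not support wildcards, so each subdomain the tests visit needs its own entry, and each request still carries its subdomain in the Host header (which is what Rails path helpers inspect).

```shell
# Sketch of /etc/hosts entries: every name resolves to 127.0.0.1,
# but the browser still sends the full hostname in the Host header,
# so the app can distinguish subdomains. Names below are examples.
127.0.0.1  myapp.test
127.0.0.1  admin.myapp.test
127.0.0.1  blerg.myapp.test
```

This solves routing-to-localhost; services like lacolhost.com achieve the same resolution via wildcard DNS without touching the hosts file.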
It's weird to annotate your localhost publicly. LOL