Smarter Search
Get better results, faster
Our intelligent search focuses on promising regions, not random guesses. Find peak performance in a fraction of the time.
Hyperoptimizer runs hundreds of automated backtests to find the parameters that optimize the metrics you care about: Sharpe ratio, drawdown, returns. No servers to manage, no manual trial-and-error.
But finding the right settings is harder than it sounds. Doing it manually takes weeks and still leaves you guessing. Tools like Ray Tune, Katib, and Optuna offer a smarter path — but use them in practice and you're provisioning servers, configuring clusters, and debugging infrastructure before your first trial even runs. Either way, it's time you don't have — or want to waste.
Hyperoptimizer was built to end that frustration.
We automate the infrastructure and the search, so you can get back to the work that actually matters - building better models.
Built for traders and strategy developers (and anyone running parameter-heavy experiments) who want better performance without spending weeks on manual backtesting.
There's no software to install. Sign up, upload your Docker container, and define your parameter ranges in our dashboard.
Our infrastructure runs hundreds of trials in parallel, using Bayesian optimization to intelligently search for the best combination.
We deliver a clear leaderboard, convergence plots, and the optimal parameter set - ready to deploy.
Why Hyperoptimizer?
Smarter Search
Our intelligent search focuses on promising regions, not random guesses. Find peak performance in a fraction of the time.
Simple setup
No clusters to set up, no scripts to maintain. Push your container, configure parameters in our dashboard, and we handle the rest.
Isolation
Your code runs in isolated containers. We only see the metrics you choose to output. Zero access to your source, data, or trading logic.
Scale
Trials run in parallel automatically. No cluster setup, no job queues - just faster answers.
Clarity
Ranked leaderboards, convergence plots, and detailed run comparisons - so you can pick the best configuration with confidence, not guesswork.
Bring your own cloud
Need complete control? Connect your cloud account and we run optimization trials on your infrastructure. Same powerful optimizer, same dashboard - but your code, data, and compute stay entirely in your hands.
How it works
No SDK, no client library, no lock-in. Two small changes to your code, and we run hundreds of trials for you.
We pass parameters as --hpo-* CLI flags at runtime. Parse them with argparse or any library you already use - no SDK to install, no vendor lock-in.
After your run finishes, print your metrics with the hpo.metrics. prefix. We pick them up automatically - no integration code required.
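Putting both steps together, a trial script might look like the sketch below. The flag names (`--hpo-lookback`, `--hpo-threshold`) and the toy computation standing in for a real backtest are illustrative only; the `--hpo-*` flag and `hpo.metrics.` prefix conventions are the ones described above.

```python
import argparse
import sys

def parse_hpo_args(argv):
    """Parse the --hpo-* flags passed at trial launch.
    The flag names below are illustrative; use whatever
    parameters your own strategy needs."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--hpo-lookback", type=int, default=20)
    parser.add_argument("--hpo-threshold", type=float, default=0.5)
    # parse_known_args ignores any extra flags your entrypoint receives
    args, _ = parser.parse_known_args(argv)
    return args

def report_metric(name, value):
    """Print a metric with the hpo.metrics. prefix so it can be
    picked up from stdout after the run."""
    print(f"hpo.metrics.{name}={value}")

if __name__ == "__main__":
    args = parse_hpo_args(sys.argv[1:])
    # Toy stand-in for a real backtest; replace with your own logic.
    sharpe = 1.0 + args.hpo_lookback * 0.01
    report_metric("sharpe", round(sharpe, 2))
```

Because argparse converts `--hpo-lookback` to the attribute `hpo_lookback`, no extra mapping code is needed.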
Trials run in parallel and the optimizer learns from each result. Watch your leaderboard update live as the best configuration rises to the top.
Can't find the answer you're looking for? Reach out to our support team.
Can you see my code or data?
Your code runs inside fully isolated containers on our infrastructure. We have zero access to your source code, strategy logic, or proprietary data. The only thing we read is the stdout metric lines you explicitly print (e.g. hpo.metrics.sharpe=2.1). Containers are destroyed after each trial completes. We never store, inspect, or log your application's internal state, outputs, or filesystem.
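For intuition, here is a rough sketch of how metric lines like `hpo.metrics.sharpe=2.1` could be separated from ordinary log output. This is an illustration, not the actual implementation, and the assumed numeric format (plain decimal numbers) is an assumption:

```python
import re

# Matches lines like "hpo.metrics.sharpe=2.1"; the accepted metric
# names and numeric format here are illustrative assumptions.
METRIC_LINE = re.compile(r"^hpo\.metrics\.(?P<name>[\w.]+)=(?P<value>-?\d+(?:\.\d+)?)$")

def extract_metrics(stdout_text):
    """Collect metric lines from a trial's stdout; every other line
    (ordinary application logging) is ignored."""
    metrics = {}
    for line in stdout_text.splitlines():
        m = METRIC_LINE.match(line.strip())
        if m:
            metrics[m.group("name")] = float(m.group("value"))
    return metrics
```

The point of the prefix is exactly this: anything you print without it never leaves your container's ordinary logs.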
Can I run my own code?
Yes, that's the whole point. You package your code in a Docker container, and we run it as trials during optimization. You have full control over your objective function, dependencies, and runtime environment.
Can trials run on my own infrastructure?
We're building Bring your own cloud: you connect your cloud account and we schedule trials on your infrastructure, so your data and code never leave your environment. Same optimizer and dashboard - we just use your compute. For early access, contact us or see our Bring your own cloud page.
How much does it cost?
We're finalizing our pricing model. Join the beta to get early-access pricing.
Do trials run in parallel?
Yes. Multiple trials run in parallel across our infrastructure. The optimization algorithm suggests new hyperparameter combinations based on results from completed trials.
Which optimization algorithms do you support?
We support Bayesian optimization (e.g. TPE) and other search strategies. You configure the parameters, their ranges, and the metrics to optimize; we handle the rest.
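To see why learning from completed trials beats blind guessing, here is a deliberately tiny toy loop. It is neither TPE nor our implementation, just the suggest-and-observe shape: each new suggestion can use the history of finished trials.

```python
import random

def suggest(history, low, high, rng):
    """Toy heuristic (NOT real TPE): half the time, sample near the
    best completed trial; otherwise explore uniformly."""
    if history and rng.random() < 0.5:
        best_x, _ = min(history, key=lambda h: h[1])
        x = best_x + rng.gauss(0, 0.1 * (high - low))
        return min(high, max(low, x))
    return rng.uniform(low, high)

def optimize(objective, low, high, n_trials, seed=0):
    rng = random.Random(seed)
    history = []  # completed trials as (parameter, loss) pairs
    for _ in range(n_trials):
        x = suggest(history, low, high, rng)
        history.append((x, objective(x)))
    return min(history, key=lambda h: h[1])

# Toy objective standing in for "run a backtest, return a loss".
best_x, best_loss = optimize(lambda x: (x - 2.0) ** 2, -10, 10, 60)
```

Real Bayesian optimizers replace the crude "sample near the best" step with a probabilistic model of the objective, but the feedback loop is the same.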
What happens if a trial fails?
Each trial runs independently. If one fails, it doesn't affect the others. Results from completed trials are always preserved, and the optimizer continues with the remaining budget.
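That failure-isolation behavior can be sketched in a few lines. This is a simplified illustration of the semantics, not the actual scheduler:

```python
def run_trials_independently(trial_fn, param_sets):
    """Run each trial in isolation: a failure is recorded and the
    rest of the budget keeps running; completed results are kept."""
    completed, failed = {}, {}
    for trial_id, params in enumerate(param_sets):
        try:
            completed[trial_id] = trial_fn(params)
        except Exception as exc:
            # the failure is noted, but no other trial is affected
            failed[trial_id] = str(exc)
    return completed, failed
```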
Join the private beta (free) to find better settings in a fraction of the time. Try the platform, share your feedback, and help shape the future of optimization. Want us to support your framework or stack? Let us know.