Real-time, in-memory predictions in your own cloud

Have you built a great BigML model or ensemble and want to integrate it into your application or service with no hassle? Do you need to score new data in near real time? Does your application require millions of predictions in batches? The BigML PredictServer is a dedicated cloud image that you can deploy to create blazingly fast predictions with ease. The PredictServer is available from Amazon Web Services as an EC2 image.

Real-time Scoring with BigML PredictServer

The BigML PredictServer is a dedicated machine image that can be deployed in your own AWS account to provide fast and reliable predictions from BigML models and ensembles.

Easy

The API mirrors BigML.io, so code that already integrates with bigml.io can be pointed at a PredictServer with minimal changes.
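For example, creating a prediction through bigml.io and through a PredictServer differs only in the host; the credentials and model id below are the example values used later on this page:

# Prediction via bigml.io:
curl "https://bigml.io/prediction?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f" \
    -X POST -H 'content-type: application/json' \
    -d '{"model":"model/51df6f52035d0760380042e7", "input_data":{ "education-num": 15 }}'

# The same prediction via your PredictServer; only the host changes
# (-k skips certificate verification, as in the examples below):
curl "https://ec2-54-221-20-10.compute-1.amazonaws.com/prediction?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f" \
    -k -X POST -H 'content-type: application/json' \
    -d '{"model":"model/51df6f52035d0760380042e7", "input_data":{ "education-num": 15 }}'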

Fast

The PredictServer keeps everything in RAM and creates predictions in parallel. Throughput is roughly (1000 × number of cores) / (number of models in the ensemble) predictions per second. For example, a 10-model ensemble on an 8-core instance yields on the order of 1000/10 × 8 = 800 predictions per second.

Low Latency

The image can be deployed in any AWS region worldwide, making it possible to lower prediction latency for your application.

Reliable

Because the machine is dedicated, there is no competition for resources.

Secure

Use security groups to limit access, or even deploy the image in a VPC.
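As a sketch, you could restrict HTTPS access to a known address range with the AWS CLI; the group name, VPC id, security group id, and CIDR below are illustrative:

# Create a security group for the PredictServer:
aws ec2 create-security-group --group-name predictserver-sg \
    --description "BigML PredictServer access" --vpc-id vpc-0abc1234

# Allow HTTPS only from your application's address range
# (use the group id returned by the previous command):
aws ec2 authorize-security-group-ingress --group-id sg-0def5678 \
    --protocol tcp --port 443 --cidr 203.0.113.0/24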

Scalable

Performance scales with the number of cores. You can instantiate one or more BigML PredictServers and seamlessly integrate them into your existing data center, as the sketch below shows.
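Since each instance answers predictions independently, a client can simply spread requests across them. A minimal round-robin sketch in bash, assuming both instances have cached the same model (the second hostname is hypothetical; credentials and model id are the Quick Start example values):

# Two PredictServer instances; the second hostname is hypothetical:
HOSTS=(ec2-54-221-20-10.compute-1.amazonaws.com ec2-54-221-20-99.compute-1.amazonaws.com)

# Fire ten predictions in parallel, alternating between instances:
for i in $(seq 0 9); do
    curl -sk -X POST \
        "https://${HOSTS[$((i % 2))]}/prediction?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f" \
        -H 'content-type: application/json' \
        -d '{"model":"model/51df6f52035d0760380042e7", "input_data":{ "education-num": 15 }}' &
done
wait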

Quick Start
  1. Deploy the BigML PredictServer from the AWS Marketplace using 1-Click.
  2. In the AWS console, note the hostname and the instance-id of the BigML PredictServer. In the example below they are: ec2-54-221-20-10.compute-1.amazonaws.com and i-54f1bc39.
  3. Change the auth_token for the BigML PredictServer; the initial auth_token is the instance-id. In the example below, we are changing it to mytoken:

curl "https://ec2-54-221-20-10.compute-1.amazonaws.com/config?auth_token=i-54f1bc39" \
    -k -X POST \
    -H 'content-type: application/json' \
    -d '{"auth_token":"mytoken"}'
Deploying BigML PredictServer
  4. Add a list of BigML usernames that are allowed to access the PredictServer. In this example, we are allowing the users wendy and shara:

curl "https://ec2-54-221-20-10.compute-1.amazonaws.com/config?auth_token=mytoken" \
    -k -X POST -H 'content-type: application/json' \
    -d '{"allow_users": [ "wendy", "shara" ]}'
Authorizing BigML users to access the PredictServer
  5. Cache a model on the PredictServer using the same username and api_key from bigml.io:

curl -k -X GET "https://ec2-54-221-20-10.compute-1.amazonaws.com/model/51df6f52035d0760380042e7?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f"
Caching a model on the PredictServer
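The PredictServer serves ensembles as well as models; assuming its resource paths mirror bigml.io, an ensemble would be cached the same way (the ensemble id below is illustrative):

curl -k -X GET "https://ec2-54-221-20-10.compute-1.amazonaws.com/ensemble/51df6f52035d0760380042f9?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f"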
  6. Create a prediction:

curl "https://ec2-54-221-20-10.compute-1.amazonaws.com/prediction?username=wendy;api_key=92127b85415db7caa2ca985edfdbcaca766d836f"
    -k -X POST \
    -H 'content-type: application/json' \
    -d '{"model":"model/51df6f52035d0760380042e7", "input_data":{ "education-num": 15 }}'
Predicting with your model
Contact us

If you have any questions about BigML's PredictServer, get in touch right away. Contact us about scheduling a private demo at a time that works for you.