autogluon.cloud.MultiModalCloudPredictor

class autogluon.cloud.MultiModalCloudPredictor(local_output_path: Optional[str] = None, cloud_output_path: Optional[str] = None, backend: str = 'sagemaker', verbosity: int = 2)[source]
__init__(local_output_path: Optional[str] = None, cloud_output_path: Optional[str] = None, backend: str = 'sagemaker', verbosity: int = 2) → None
Parameters
  • local_output_path (Optional[str], default = None) – Path to the directory where the downloaded trained predictor, batch transform results, and intermediate outputs are saved. If unspecified, a time-stamped folder called “AutogluonCloudPredictor/ag-[TIMESTAMP]” is created in the working directory to store these outputs. Note: To call fit() twice and keep the results of each fit, you must either specify different local_output_path locations or leave local_output_path unspecified. Otherwise, files from the first fit() will be overwritten by the second fit().

  • cloud_output_path (Optional[str], default = None) – Path to the S3 location where intermediate artifacts are uploaded and trained models are saved. This must be provided because S3 bucket names are globally unique, so one cannot be created automatically for you. If you provide only the bucket and no subfolder, a time-stamped folder called “YOUR_BUCKET/ag-[TIMESTAMP]” will be created. If you provide both the bucket and a subfolder, that path will be used as-is. Note: To call fit() twice and keep the results of each fit, you must either specify different cloud_output_path locations or provide only the bucket without a subfolder. Otherwise, files from the first fit() will be overwritten by the second fit().

  • backend (str, default = "sagemaker") – The backend to use. Valid options are “sagemaker” and “ray_aws”. The SageMaker backend supports training, deployment, and batch inference on AWS SageMaker; only single-instance training is supported. The RayAWS backend supports distributed training by creating an ephemeral Ray cluster on AWS; deployment and batch inference are not supported yet.

  • verbosity (int, default = 2) – Verbosity levels range from 0 to 4 and control how much information is printed. Higher levels correspond to more detailed print statements (you can set verbosity = 0 to suppress warnings). If using logging, you can alternatively control the amount of information printed via logger.setLevel(L), where L ranges from 0 to 50 (Note: higher values of L correspond to fewer print statements, the opposite of verbosity levels).
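
A minimal construction sketch using the parameters above. The bucket and folder names are placeholders you would replace with your own; they are not values shipped with the library:

>>> from autogluon.cloud import MultiModalCloudPredictor
>>> predictor = MultiModalCloudPredictor(
...     cloud_output_path="s3://YOUR_BUCKET/your-subfolder",  # placeholder S3 location
...     local_output_path="my_multimodal_cloud_workdir",      # placeholder local directory
...     backend="sagemaker",
...     verbosity=2,
... )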

Methods

attach_endpoint

Attach the current CloudPredictor to an existing endpoint.

attach_job

Attach to a SageMaker training job.

cleanup_deployment

Delete the deployed endpoint and other artifacts.

deploy

Deploy a predictor to an endpoint, which can be used to do real-time inference later.

detach_endpoint

Detach the current endpoint and return it.

download_trained_predictor

Download the trained predictor from the cloud.

fit

Fit the predictor with the backend.

generate_default_permission

Generate the required permission file in JSON format for the CloudPredictor with your choice of backend.

get_batch_inference_job_info

Get general info of the batch inference job.

get_batch_inference_job_status

Get the status of the batch inference job.

get_fit_job_output_path

Get the cloud output path of the trained artifact.

get_fit_job_status

Get the status of the training job.

info

Return general info about the CloudPredictor.

leaderboard

load

Load the CloudPredictor.

predict

Batch inference.

predict_proba

Batch inference. When minimizing latency isn't a concern, the batch transform functionality may be easier, more scalable, and more appropriate.

predict_proba_real_time

Predict probability with the deployed endpoint.

predict_real_time

Predict with the deployed endpoint.

save

Save the CloudPredictor so that the user can later reload it to regain access to the deployed endpoint.

to_local_predictor

Convert the cloud-trained predictor to a local AutoGluon Predictor.
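
A sketch of a typical fit, deploy, and predict workflow using the methods above, continuing from the construction example. The fit() argument names (predictor_init_args, predictor_fit_args) follow the AutoGluon Cloud tutorials rather than this page, and the column and file names are placeholders:

>>> predictor_init_args = {"label": "label"}          # passed to MultiModalPredictor(); column name is a placeholder
>>> predictor_fit_args = {"train_data": "train.csv"}  # passed to MultiModalPredictor.fit(); path is a placeholder
>>> predictor.fit(
...     predictor_init_args=predictor_init_args,
...     predictor_fit_args=predictor_fit_args,
... )
>>> predictor.deploy()                                # create a real-time endpoint
>>> preds = predictor.predict_real_time("test.csv")   # low-latency inference against the endpoint
>>> predictor.cleanup_deployment()                    # delete the endpoint and related artifacts
>>> batch_preds = predictor.predict("test.csv")       # batch inference via batch transform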

Attributes

backend_map

endpoint_name

Return the name of the CloudPredictor's deployed endpoint.

is_fit

Whether this CloudPredictor has already been fitted.

predictor_file_name

predictor_type

Type of the underlying AutoGluon Predictor.
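
A small sketch of inspecting these attributes after fitting and deploying; the commented values are illustrative assumptions rather than guaranteed return values:

>>> predictor.is_fit          # True once a training job has completed
>>> predictor.predictor_type  # type of the underlying AutoGluon predictor, e.g. "multimodal"
>>> predictor.endpoint_name   # name of the deployed endpoint, if one exists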