autogluon.cloud.TabularCloudPredictor
- class autogluon.cloud.TabularCloudPredictor(local_output_path: Optional[str] = None, cloud_output_path: Optional[str] = None, backend: str = 'sagemaker', verbosity: int = 2)
- __init__(local_output_path: Optional[str] = None, cloud_output_path: Optional[str] = None, backend: str = 'sagemaker', verbosity: int = 2) -> None
- Parameters
local_output_path (Optional[str], default = None) – Path to the directory where the downloaded trained predictor, batch transform results, and intermediate outputs will be saved. If unspecified, a time-stamped folder named “AutogluonCloudPredictor/ag-[TIMESTAMP]” is created in the working directory to store these outputs. Note: To call fit() twice and keep the results of each fit, you must either specify different local_output_path locations or leave local_output_path unspecified. Otherwise, files from the first fit() will be overwritten by the second fit().
cloud_output_path (Optional[str], default = None) – Path to the S3 location where intermediate artifacts will be uploaded and trained models will be saved. This must be provided because S3 bucket names are globally unique, so one cannot be created automatically for you. If you provide only the bucket without a subfolder, a time-stamped folder named “YOUR_BUCKET/ag-[TIMESTAMP]” will be created. If you provide both the bucket and a subfolder, that path will be used instead. Note: To call fit() twice and keep the results of each fit, you must either specify different cloud_output_path locations or provide only the bucket without a subfolder. Otherwise, files from the first fit() will be overwritten by the second fit().
backend (str, default = "sagemaker") – The backend to use. Valid options are “sagemaker” and “ray_aws”. The SageMaker backend supports training, deployment, and batch inference on AWS SageMaker; only single-instance training is supported. The RayAWS backend supports distributed training by creating an ephemeral Ray cluster on AWS; deployment and batch inference are not supported yet.
verbosity (int, default = 2) – Verbosity levels range from 0 to 4 and control how much information is printed. Higher levels correspond to more detailed print statements (set verbosity = 0 to suppress warnings). If using logging, you can alternatively control the amount of information printed via logger.setLevel(L), where L ranges from 0 to 50 (note: higher values of L correspond to fewer print statements, the opposite of verbosity levels).
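A minimal construction sketch follows; the bucket path is a hypothetical placeholder, since S3 bucket names are globally unique and you must supply one you own:

```python
from autogluon.cloud import TabularCloudPredictor

# "s3://YOUR_BUCKET/ag-demo" is a hypothetical placeholder; point this
# at an S3 bucket that you own.
cloud_predictor = TabularCloudPredictor(
    cloud_output_path="s3://YOUR_BUCKET/ag-demo",
    backend="sagemaker",  # or "ray_aws" for distributed training
    verbosity=2,
)
```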
Methods
attach_endpoint
Attach the current CloudPredictor to an existing endpoint.
attach_job
Attach to a SageMaker training job.
cleanup_deployment
Delete the deployed endpoint and other artifacts.
deploy
Deploy a predictor to an endpoint, which can be used to do real-time inference later.
detach_endpoint
Detach the current endpoint and return it.
download_predictor
Download the trained predictor from the cloud.
fit
Fit the predictor with the backend.
generate_default_permission
Generate the required permission file in JSON format for CloudPredictor with your choice of backend.
get_batch_inference_job_info
Get general info about the batch inference job.
get_batch_inference_job_status
Get the status of the batch inference job.
get_fit_job_output_path
Get the output path in the cloud of the trained artifact.
get_fit_job_status
Get the status of the training job.
info
Return general info about the CloudPredictor.
load
Load the CloudPredictor.
predict
Batch inference.
predict_proba
Batch inference. When minimizing latency isn't a concern, the batch transform functionality may be easier, more scalable, and more appropriate.
predict_proba_real_time
Predict probabilities with the deployed endpoint.
predict_real_time
Predict with the deployed endpoint.
save
Save the CloudPredictor so that the user can later reload it to regain access to the deployed endpoint.
to_local_predictor
Convert the cloud-trained predictor to a local AutoGluon predictor.
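To show how these methods fit together, here is a sketch of the typical train, deploy, and predict workflow; the file names and label column are hypothetical, and predictor_init_args/predictor_fit_args are forwarded to the underlying TabularPredictor constructor and its fit():

```python
# Sketch of a typical workflow; "train.csv", "test.csv", and the label
# column "class" are hypothetical placeholders.
cloud_predictor.fit(
    predictor_init_args={"label": "class"},          # passed to TabularPredictor()
    predictor_fit_args={"train_data": "train.csv"},  # passed to TabularPredictor.fit()
)

# Real-time inference: deploy an endpoint, predict, then tear it down.
cloud_predictor.deploy()
predictions = cloud_predictor.predict_real_time("test.csv")
cloud_predictor.cleanup_deployment()

# Batch inference: no persistent endpoint is required.
batch_predictions = cloud_predictor.predict("test.csv")
```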
Attributes
backend_map
endpoint_name
Return the CloudPredictor's deployed endpoint name.
is_fit
Whether this CloudPredictor has already been fit.
predictor_file_name
predictor_type
Type of the underlying AutoGluon predictor.