AutoGluon-Cloud: Train and Deploy AutoGluon on the Cloud
AutoGluon-Cloud provides tools for users to train, fine-tune, and deploy AutoGluon-backed models on the cloud. With just a few lines of code, users can train a model and perform inference in the cloud without worrying about MLOps details such as resource management.
Currently, AutoGluon-Cloud supports AWS SageMaker as the cloud backend.
Tabular

```python
import pandas as pd
from autogluon.cloud import TabularCloudPredictor

train_data = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv")
test_data = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv")
predictor_init_args = {"label": "class"}  # init args you would pass to AG TabularPredictor
predictor_fit_args = {"train_data": train_data, "time_limit": 120}  # fit args you would pass to AG TabularPredictor
cloud_predictor = TabularCloudPredictor(cloud_output_path="YOUR_S3_BUCKET_PATH")
cloud_predictor.fit(predictor_init_args=predictor_init_args, predictor_fit_args=predictor_fit_args)
cloud_predictor.deploy()
result = cloud_predictor.predict_real_time(test_data)
cloud_predictor.cleanup_deployment()
# Batch inference
result = cloud_predictor.predict(test_data)
```
Multimodal
```python
import pandas as pd
from autogluon.cloud import MultiModalCloudPredictor

train_data = pd.read_csv("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet")
test_data = pd.read_csv("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet")
predictor_init_args = {"label": "label"}  # init args you would pass to AG MultiModalPredictor
predictor_fit_args = {"train_data": train_data}  # fit args you would pass to AG MultiModalPredictor
cloud_predictor = MultiModalCloudPredictor(cloud_output_path="YOUR_S3_BUCKET_PATH")
cloud_predictor.fit(predictor_init_args=predictor_init_args, predictor_fit_args=predictor_fit_args)
cloud_predictor.deploy()
result = cloud_predictor.predict_real_time(test_data)
cloud_predictor.cleanup_deployment()
# Batch inference
result = cloud_predictor.predict(test_data)
```
TimeSeries
```python
import pandas as pd
from autogluon.cloud import TimeSeriesCloudPredictor

data = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/cloud/timeseries_train.csv")
id_column = "item_id"
timestamp_column = "timestamp"
target = "target"
predictor_init_args = {"target": target}  # init args you would pass to AG TimeSeriesPredictor
predictor_fit_args = {"train_data": data, "time_limit": 120}  # fit args you would pass to AG TimeSeriesPredictor
cloud_predictor = TimeSeriesCloudPredictor(cloud_output_path="YOUR_S3_BUCKET_PATH")
cloud_predictor.fit(
    predictor_init_args=predictor_init_args,
    predictor_fit_args=predictor_fit_args,
    id_column=id_column,
    timestamp_column=timestamp_column,
)
cloud_predictor.deploy()
result = cloud_predictor.predict_real_time(test_data=data, id_column=id_column, timestamp_column=timestamp_column, target=target)
cloud_predictor.cleanup_deployment()
# Batch inference
result = cloud_predictor.predict(test_data=data, id_column=id_column, timestamp_column=timestamp_column, target=target)
```
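The TimeSeries example above assumes data in long format: one row per (item, timestamp) observation, with the columns named by `id_column`, `timestamp_column`, and `target`. A minimal sketch of that layout, built with pandas only (the column names match the example; the item IDs and values are illustrative):

```python
import pandas as pd

# Long-format time series data: one row per (item_id, timestamp) pair.
# Column names match the TimeSeries example above; values are made up for illustration.
data = pd.DataFrame({
    "item_id": ["A", "A", "A", "B", "B", "B"],
    "timestamp": pd.to_datetime([
        "2024-01-01", "2024-01-02", "2024-01-03",
        "2024-01-01", "2024-01-02", "2024-01-03",
    ]),
    "target": [10.0, 12.0, 11.5, 3.0, 2.5, 4.0],
})
print(data)
```

Multiple series live in one table, distinguished by `item_id`; the predictor groups rows by that column and orders them by `timestamp` when forecasting.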
Installation

```shell
pip install -U pip
pip install -U setuptools wheel
pip install autogluon.cloud  # You don't need to install autogluon itself locally
pip install --upgrade sagemaker  # Required to ensure the information about newly released containers is available
```