TimeSeriesCloudPredictor.predict

TimeSeriesCloudPredictor.predict(test_data: Union[str, DataFrame], static_features: Optional[Union[str, DataFrame]] = None, predictor_path: Optional[str] = None, framework_version: str = 'latest', job_name: Optional[str] = None, instance_type: str = 'ml.m5.2xlarge', instance_count: int = 1, custom_image_uri: Optional[str] = None, wait: bool = True, backend_kwargs: Optional[Dict] = None) → Optional[DataFrame]

Predict using SageMaker batch transform. When minimizing latency is not a concern, batch transform may be easier, more scalable, and more appropriate. If you need to minimize latency, use predict_real_time() instead. To learn more: https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html This method first creates an AutoGluonSagemakerInferenceModel from the trained predictor, then creates a transformer from it, and finally calls transform.
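For illustration, test_data can be prepared as a long-format data frame with item, timestamp, and target columns (the layout TimeSeriesDataFrame expects), together with an optional static_features frame keyed by item. This is a minimal sketch; the column names and values below are assumptions for the example, not requirements of this method:

```python
import pandas as pd

# Long-format time series: one row per (item, timestamp) pair.
test_data = pd.DataFrame({
    "item_id": ["A"] * 3 + ["B"] * 3,
    "timestamp": pd.date_range("2024-01-01", periods=3).tolist() * 2,
    "target": [10.0, 12.0, 11.5, 5.0, 4.8, 5.2],
})

# Optional per-item metadata: one row per item in the item index.
static_features = pd.DataFrame({
    "item_id": ["A", "B"],
    "category": ["electronics", "grocery"],
})

# Both arguments also accept local CSV paths instead of data frames:
test_data.to_csv("test_data.csv", index=False)
static_features.to_csv("static_features.csv", index=False)
```

Either the data frames themselves or the CSV paths can then be passed as test_data and static_features.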

Parameters
  • test_data (Union[str, pd.DataFrame]) – The data to run inference on. Can be a pandas.DataFrame or a local path to a CSV file.

  • static_features (Optional[Union[str, pd.DataFrame]]) – An optional data frame describing the metadata attributes of individual items in the item index. For more detail, please refer to the TimeSeriesDataFrame documentation: https://auto.gluon.ai/stable/api/autogluon.predictor.html#timeseriesdataframe

  • target (str) – Name of the column that contains the target values to forecast.

  • predictor_path (str) – Path to the predictor tarball to use for prediction. Can be either a local path or an S3 location. If None, the most recent predictor trained with fit() will be used.

  • framework_version (str, default = 'latest') – Inference container version of AutoGluon. If 'latest', the latest available container version will be used. If a specific version is provided, that version will be used. This argument is ignored if custom_image_uri is set.

  • job_name (str, default = None) – Name of the launched batch transform job. If None, CloudPredictor will create one with the prefix ag-cloudpredictor.

  • instance_count (int, default = 1) – Number of instances used for batch transform.

  • instance_type (str, default = 'ml.m5.2xlarge') – Instance type to be used for batch transform.

  • wait (bool, default = True) – Whether to wait for the batch transform to complete. Note that even when this is False, the function will not return immediately, because some preparation is needed before the transform is launched.

  • backend_kwargs (dict, default = None) –

    Any extra arguments to pass to the underlying backend. For the SageMaker backend, valid keys are:

    1. download: bool, default = True

      Whether to download the batch transform results to disk and load them after the batch transform finishes. Ignored if wait is False.

    2. persist: bool, default = True

      Whether to persist the downloaded batch transform results on disk. Ignored if download is False.

    3. save_path: str, default = None

      Path to save the downloaded results. Ignored if download is False. If None, CloudPredictor will create one. If persist is False, the file will first be downloaded to this path and then removed.

    4. model_kwargs: dict, default = dict()

      Any extra arguments needed to initialize the SageMaker Model. Please refer to https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#model for all options.

    5. transformer_kwargs: dict

      Any extra arguments to pass to the transformer. Please refer to https://sagemaker.readthedocs.io/en/stable/api/inference/transformer.html#sagemaker.transformer.Transformer for all options.

    6. transform_kwargs: dict

      Any extra arguments to pass to transform. Please refer to https://sagemaker.readthedocs.io/en/stable/api/inference/transformer.html#sagemaker.transformer.Transformer.transform for all options.
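Putting the backend options above together, a call might be sketched as follows. The backend_kwargs keys match the list above; the predictor object, file paths, and the commented-out invocation are assumptions for illustration only:

```python
# Assemble options for the SageMaker backend using the keys documented above.
backend_kwargs = {
    "download": True,          # fetch transform results once the job finishes
    "persist": True,           # keep the downloaded results on disk
    "save_path": None,         # let CloudPredictor choose a save location
    "model_kwargs": {},        # extra args for initializing the SageMaker Model
    "transformer_kwargs": {},  # extra args for the Transformer
    "transform_kwargs": {},    # extra args for Transformer.transform
}

# Hypothetical invocation (requires a fitted TimeSeriesCloudPredictor and
# AWS credentials; shown commented out as a sketch):
# predictions = predictor.predict(
#     test_data="test_data.csv",
#     static_features="static_features.csv",
#     instance_type="ml.m5.2xlarge",
#     instance_count=1,
#     wait=True,
#     backend_kwargs=backend_kwargs,
# )
```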