Accessing models through different services in AutoGluonRAG
Depending on which service you use to access models for the different modules, you may need to provide access keys for that service.
Using GPT models:
Create an OpenAI account here: https://openai.com/
Access the API section after logging in: go to the “API” tab or use this link to open the API dashboard.
Select the appropriate billing plan for your account to access OpenAI models. Complete all necessary financial information and billing steps.
Generate an API key and store it in a .txt file on your device. When using AutoGluon-RAG, make sure to specify the path to this file by setting the openai_key_file argument in the config file or through code (refer to Setting Parameters for AutoGluonRAG through code for more info).
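For example, a minimal sketch of storing the key on disk (the file path ~/openai_key.txt is only an illustrative choice, not a path required by AutoGluon-RAG, and the key value is a placeholder):
# Write the OpenAI API key to a plain-text file (placeholder value).
echo "YOUR_OPENAI_API_KEY" > ~/openai_key.txt
# Restrict permissions so only your user can read the key file.
chmod 600 ~/openai_key.txt
You would then point the openai_key_file argument at ~/openai_key.txt.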
Using AWS Bedrock models: You can either use the AWS CLI or manually set the AWS keys in your command line configuration file (bash_profile or zshrc file). If you are doing it manually, make sure to set the following parameters:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
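As a sketch, the manual setup in your bash_profile or zshrc file might look like the following (the key values and region are placeholders):
# AWS credentials and default region used for AWS Bedrock (placeholder values).
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
export AWS_DEFAULT_REGION="us-east-1"
Alternatively, running aws configure with the AWS CLI prompts you for the same values and stores them under ~/.aws/.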
Using Hugging Face models: You can use the Hugging Face Command Line Interface (CLI) to access Hugging Face models. Follow these steps:
Installation
pip install -U "huggingface_hub[cli]"
Once installed, check that the CLI is correctly set up:
huggingface-cli --help
Log in to your Hugging Face account
First, create a Hugging Face account here.
Then, obtain an access token using this link. You can find more information about User access tokens here.
Once you have your token, run the following command in your terminal:
huggingface-cli login
Enter your access token when prompted. You can optionally use the Hugging Face token as a git credential if you plan to use git locally and contribute to this package.
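If you prefer a non-interactive login, a sketch like the following should also work (the token value is a placeholder; --token and --add-to-git-credential are options of the huggingface-cli login command):
# Log in by passing the token directly and store it as a git credential.
huggingface-cli login --token "hf_YOUR_TOKEN" --add-to-git-credential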