AI/ML API
Overview
| Property | Details | 
|---|---|
| Description | AI/ML API provides access to state-of-the-art AI models including flux-pro/v1.1 for high-quality image generation. | 
| Provider Route on LiteLLM | aiml/ | 
| Link to Provider Doc | AI/ML API |
| Supported Operations | /chat/completions, /images/generations |
LiteLLM supports AI/ML API chat completion, embedding, and image generation calls.
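As a quick illustration, here is a minimal synchronous image generation sketch; the model id `aiml/flux-pro/v1.1` simply prefixes the overview's example model with `aiml/` (verify the exact identifier on aimlapi.com/models), and it assumes your key is configured as shown in the next section:
import litellm

# assumes AIML_API_KEY is set in the environment (see "API Base, Key" below)
response = litellm.image_generation(
    model="aiml/flux-pro/v1.1",  # assumed id: "aiml/" + the model name from aimlapi.com/models
    api_base="https://api.aimlapi.com/v1",  # image generation uses the v1 endpoint
    prompt="A cute baby sea otter",
)
print(response)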
API Base, Key
import os

# env variables
os.environ['AIML_API_KEY'] = "your-api-key"
os.environ['AIML_API_BASE'] = "https://api.aimlapi.com"  # [optional]
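With these environment variables set, later calls can omit `api_key` and `api_base`. A minimal sketch, assuming `AIML_API_KEY` is exported as shown above:
import litellm

# the key (and optional base URL) are read from AIML_API_KEY / AIML_API_BASE
response = litellm.completion(
    model="aiml/meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response.choices[0].message.content)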
Getting started with the AI/ML API is simple. Follow these steps to set up your integration:
1. Get Your API Key
To begin, you need an API key. You can obtain yours here:
Get Your API Key
2. Explore Available Models
Looking for a different model? Browse the full list of supported models:
Full List of Models
3. Read the Documentation
For detailed setup instructions and usage guidelines, check out the official documentation:
AI/ML API Docs
4. Need Help?
If you have any questions, feel free to reach out. We're happy to assist! Discord
Usage
You can choose from Llama, Qwen, Flux, and 200+ other open and closed-source models on aimlapi.com/models. For example:
import litellm
response = litellm.completion(
    model="aiml/meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo", # The model name must include prefix "openai" + the model name from ai/ml api
    api_key="", # your aiml api-key 
    api_base="https://api.aimlapi.com/v2",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
)
Streaming
import litellm
response = litellm.completion(
    model="aiml/Qwen/Qwen2-72B-Instruct",  # The model name must include prefix "openai" + the model name from ai/ml api
    api_key="",  # your aiml api-key 
    api_base="https://api.aimlapi.com/v2",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
    stream=True,
)
for chunk in response:
    print(chunk)
Async Completion
import asyncio
import litellm
async def main():
    response = await litellm.acompletion(
        model="aiml/anthropic/claude-3-5-haiku",  # The model name must include prefix "openai" + the model name from ai/ml api
        api_key="",  # your aiml api-key
        api_base="https://api.aimlapi.com/v2",
        messages=[
            {
                "role": "user",
                "content": "Hey, how's it going?",
            }
        ],
    )
    print(response)
if __name__ == "__main__":
    asyncio.run(main())
Async Streaming
import asyncio
import traceback
import litellm
async def main():
    try:
        print("test acompletion + streaming")
        response = await litellm.acompletion(
            model="aiml/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", # The model name must include prefix "openai" + the model name from ai/ml api
            api_key="", # your aiml api-key
            api_base="https://api.aimlapi.com/v2",
            messages=[{"content": "Hey, how's it going?", "role": "user"}],
            stream=True,
        )
        print(f"response: {response}")
        async for chunk in response:
            print(chunk)
    except Exception:
        print(f"error occurred: {traceback.format_exc()}")
if __name__ == "__main__":
    asyncio.run(main())
Async Embedding
import asyncio
import litellm
async def main():
    response = await litellm.aembedding(
        model="aiml/text-embedding-3-small", # The model name must include prefix "openai" + the model name from ai/ml api
        api_key="",  # your aiml api-key
        api_base="https://api.aimlapi.com/v1", # π the URL has changed from v2 to v1
        input="Your text string",
    )
    print(response)
if __name__ == "__main__":
    asyncio.run(main())
Async Image Generation
import asyncio
import litellm
async def main():
    response = await litellm.aimage_generation(
        model="aiml/dall-e-3",  # The model name must include prefix "openai" + the model name from ai/ml api
        api_key="",  # your aiml api-key
        api_base="https://api.aimlapi.com/v1", # π the URL has changed from v2 to v1
        prompt="A cute baby sea otter",
    )
    print(response)
if __name__ == "__main__":
    asyncio.run(main())