# LlamaIndex
To use Glitch with LlamaIndex, configure the OpenAI LLM client to point at the Glitch endpoint before passing it to your index, retriever, or agent. All LLM calls made through LlamaIndex queries, retrievers, and agents are then routed through Glitch and secured.
## Configuration

```python
import os

from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4",
    api_key=os.environ["GLITCH_API_KEY"],
    api_base="https://api.golabrat.ai/v1",
)
```
## Basic Example

```python
import os

from llama_index.llms.openai import OpenAI
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Configure LlamaIndex with Glitch
llm = OpenAI(
    model="gpt-4",
    api_key=os.environ["GLITCH_API_KEY"],
    api_base="https://api.golabrat.ai/v1",
)

# Use in a query - all LLM calls are secured
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("What is the main topic?")
print(response)
```
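The same configured `llm` can also drive a LlamaIndex agent, so each reasoning and tool-calling turn goes through Glitch. A minimal wiring sketch, assuming the `ReActAgent` interface from `llama_index.core` (the agent API varies across llama-index versions, and running it requires a valid Glitch key):

```python
import os

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4",
    api_key=os.environ["GLITCH_API_KEY"],
    api_base="https://api.golabrat.ai/v1",
)

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Every reasoning step the agent takes is an LLM call routed through Glitch.
agent = ReActAgent.from_tools(
    [FunctionTool.from_defaults(fn=multiply)],
    llm=llm,
)
response = agent.chat("What is 7 times 6?")
print(response)
```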
## RAG Pipeline Example

```python
import os

from llama_index.llms.openai import OpenAI
from llama_index.core import Settings, VectorStoreIndex, Document

# Configure global LLM with Glitch
Settings.llm = OpenAI(
    model="gpt-4",
    api_key=os.environ["GLITCH_API_KEY"],
    api_base="https://api.golabrat.ai/v1",
)

# All queries use the secured LLM
documents = [Document(text="Your document content here")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Your question here")
print(response)
```
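Note that building a `VectorStoreIndex` also issues embedding requests, and `Settings.llm` does not cover those. To route them through Glitch as well, the embedding model can be configured the same way. A sketch, assuming Glitch proxies the OpenAI embeddings endpoint (the `api_base` parameter matches the LLM client's):

```python
import os

from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# Assumption: the Glitch endpoint also proxies OpenAI's embeddings route.
Settings.embed_model = OpenAIEmbedding(
    model="text-embedding-3-small",
    api_key=os.environ["GLITCH_API_KEY"],
    api_base="https://api.golabrat.ai/v1",
)
```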
## Next Steps

- Frameworks Overview — Other framework integrations
- Quick Start — Get running in 5 minutes
- API Reference — Full endpoint documentation