First, install OpenAI’s Python package:
pip install openai

import os
from dotenv import load_dotenv
import openai
# Load environment variables from .env file
load_dotenv()
# Set up your API key from environment variable
openai.api_key = os.getenv("OPENAI_API_KEY")
def generate_text(prompt, max_tokens=100):
    """
    Generate text using GPT-4 based on the provided prompt.

    Args:
        prompt (str): The input prompt for text generation
        max_tokens (int): Maximum number of tokens to generate

    Returns:
        str: Generated text
    """
    try:
        response = openai.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=max_tokens,
            temperature=0.7,
            top_p=1.0,
            frequency_penalty=0.0,
            presence_penalty=0.0
        )
        # Extract the generated text from the response
        return response.choices[0].message.content
    except Exception as e:
        return f"Error generating text: {str(e)}"
# Example usage
if __name__ == "__main__":
    prompt = "Write a short paragraph about artificial intelligence."
    generated_text = generate_text(prompt)
    print("Generated Text:")
    print(generated_text)

Output:
Generated Text:
Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, problem-solving, perception, and language understanding. AI technology is now frequently applied in areas such as robotics, voice recognition, image recognition, natural language processing, and many others. It has the potential to greatly impact various sectors of society, from healthcare and education to business and entertainment, by automating tasks and providing insightful data analysis.
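In practice, API calls like the one above can fail transiently (rate limits, timeouts). A minimal retry sketch with exponential backoff; the delay values and retry count here are illustrative choices, not OpenAI recommendations:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff.

    Sleeps base_delay, then 2x, 4x, ... between attempts, and
    re-raises the last error once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example: wrap the generate_text call defined earlier
# result = with_retries(lambda: generate_text("Hello"))
```

For production use you would catch only the specific retryable exceptions (e.g., rate-limit errors) rather than every Exception.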
To run Meta’s LLaMA 2 model locally, install:
pip install transformers torch accelerate

Let me walk you through the entire process of downloading and using LLaMA 2 models from scratch:
LLaMA 2 is not freely available — you must first request access from Meta.
Once you’re approved, you have two options to get the model files:
Option A: Direct Download from Meta
Download the model files from the link Meta sends you and extract them to a local folder (e.g., C:\AI\llama-2-7b-chat).

Option B: Using Hugging Face with Approval
pip install huggingface_hub
huggingface-cli login
huggingface-cli download meta-llama/Llama-2-7b-chat-hf --local-dir ./my-llama-model

Create a new Python file (e.g., llama_inference.py) with this code:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Path to your locally downloaded model files
local_model_path = "path/to/downloaded/llama-2-7b-chat" # Change this to your actual path
# Initialize tokenizer from local files
tokenizer = AutoTokenizer.from_pretrained(local_model_path, use_fast=False)
# Initialize model from local files
model = AutoModelForCausalLM.from_pretrained(
    local_model_path,
    torch_dtype=torch.float16,
    device_map="auto"
)
# Prepare input using LLaMA 2 chat template
input_text = "Explain reinforcement learning."
chat = [{"role": "user", "content": input_text}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate response
output = model.generate(
    **inputs,  # passes both input_ids and attention_mask
    max_new_tokens=500,
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)
# Decode and print the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
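The apply_chat_template call above renders LLaMA 2's [INST]-style prompt for you. For reference, here is a hand-rolled sketch of that single-turn format, based on the template Meta documented for the LLaMA 2 chat models (the tokenizer's built-in template is authoritative and may additionally prepend the <s> BOS token):

```python
def llama2_chat_prompt(user_msg, system_msg=None):
    """Build a single-turn LLaMA 2 chat prompt by hand.

    Mirrors the [INST] ... [/INST] format used by the LLaMA 2 chat
    models; the <<SYS>> block is included only when a system message
    is supplied.
    """
    if system_msg is not None:
        return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"
    return f"[INST] {user_msg} [/INST]"

print(llama2_chat_prompt("Explain reinforcement learning."))
# [INST] Explain reinforcement learning. [/INST]
```

Knowing the underlying format is mainly useful for debugging: if generations look off, print the prompt returned by apply_chat_template and check it matches this shape.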