# OpenAI-Request-Wrapper-Backend

A Python backend for simplifying OpenAI API interactions.

Built with FastAPI, Python, SQLite, and OpenAI's language models.

## 📑 Table of Contents

- 📍 Overview
- 📦 Features
- 📂 Structure
- 💻 Installation
- 🏗️ Usage
- 🌐 Hosting
- 📄 License
- 👏 Authors

## 📍 Overview

This repository contains a Python backend server that streamlines interactions with OpenAI's models. The "AI Wrapper OpenAI Request Responder" gives developers and individuals a simple interface for leveraging OpenAI's technology in their applications. This MVP focuses on sending requests to OpenAI and receiving responses, eliminating the need to manage API calls manually.

## 📦 Features

| Feature | Description |
|---------|-------------|
| ⚙️ Architecture | Built on FastAPI, a lightweight framework, for efficient routing and API management. |
| 📄 Documentation | The repository includes this README, detailing the MVP's features, usage, and deployment instructions. |
| 🔗 Dependencies | Relies on FastAPI, Uvicorn, Pydantic, OpenAI, and Requests. |
| 🧩 Modularity | Structured into separate files for request handling, OpenAI API interaction, and response processing. |
| 🧪 Testing | Unit tests cover the code's functionality and stability. |
| ⚡️ Performance | Optimizes communication with the OpenAI API through efficient request handling and response processing. |
| 🔐 Security | Protects API keys and user data with secure handling of sensitive information. |
| 🔀 Version Control | Uses Git with a branching model for development and maintenance. |
| 🔌 Integrations | Exposes a REST API for integration with other applications and platforms. |
| 📶 Scalability | Designed to handle increasing request volumes efficiently. |
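The modularity described above can be pictured as one small function per concern. The sketch below is illustrative only: the function names and the stubbed OpenAI call are assumptions, not the repository's actual code.

```python
# Illustrative request -> API call -> response pipeline.
# Names and the stubbed OpenAI call are assumptions, not repo code.

def parse_request(body: dict) -> dict:
    """Validate the incoming request body (request handling)."""
    if "model" not in body or "prompt" not in body:
        raise ValueError("request must include 'model' and 'prompt'")
    return {"model": body["model"],
            "prompt": body["prompt"],
            "temperature": body.get("temperature", 0.7)}

def call_openai(params: dict) -> str:
    """Forward the request to OpenAI (API interaction). Stubbed here."""
    return f"completion for: {params['prompt']}"

def format_response(text: str) -> dict:
    """Wrap the model output for the client (response processing)."""
    return {"response": text}

def handle(body: dict) -> dict:
    """Compose the three stages into one endpoint handler."""
    return format_response(call_openai(parse_request(body)))
```

Keeping the stages separate means each one can be unit-tested in isolation, which matches the testing row in the table.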

## 📂 Structure

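A minimal layout consistent with the paths referenced elsewhere in this README (inferred, not exhaustive):

```
OpenAI-Request-Wrapper-Backend/
├── api/
│   └── main.py        # FastAPI app entry point (uvicorn api.main:app)
├── utils/
│   └── config.py      # environment-variable handling
├── requirements.txt
├── .env.example
└── startup.sh
```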

## 💻 Installation

### 🔧 Prerequisites

- Python 3.10+
- The pip package manager
- An OpenAI API key

### 🚀 Setup Instructions

1. Clone the repository:
   ```shell
   git clone https://github.com/coslynx/OpenAI-Request-Wrapper-Backend.git
   cd OpenAI-Request-Wrapper-Backend
   ```
2. Install dependencies:
   ```shell
   pip install -r requirements.txt
   ```
3. Set up environment variables:
   ```shell
   cp .env.example .env
   ```
   - Open `.env` and replace `YOUR_OPENAI_API_KEY_HERE` with your actual OpenAI API key.
   - Optionally set `DATABASE_URL` to use a different database.

## 🏗️ Usage

### 🏃‍♂️ Running the Backend

```shell
uvicorn api.main:app --host 0.0.0.0 --port 8000
```

### ⚙️ Configuration

- The `utils/config.py` file reads environment variables such as `OPENAI_API_KEY` and `DATABASE_URL`; set their values in the `.env` file.
- The backend server listens on port 8000 by default. Change this in `startup.sh` or by passing a different `--port` to `uvicorn` when running the server.
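As a sketch of how such environment-variable handling might look (the actual `utils/config.py` may differ; the function name and SQLite default below are assumptions):

```python
import os

# Illustrative settings loader; the real utils/config.py may differ.
def load_settings() -> dict:
    """Read configuration from environment variables, mirroring the .env keys."""
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set; copy .env.example to .env first")
    return {
        "openai_api_key": api_key,
        # SQLite file by default; override DATABASE_URL to use another database.
        "database_url": os.getenv("DATABASE_URL", "sqlite:///./app.db"),
    }
```

Failing fast on a missing key surfaces configuration mistakes at startup rather than on the first request.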

### 📚 Examples

Making a text generation request:

```shell
curl -X POST http://localhost:8000/generate_text \
    -H "Content-Type: application/json" \
    -d '{"model": "text-davinci-003", "prompt": "Write a short story about a cat", "temperature": 0.7}'
```

Response:

```json
{
  "response": "Once upon a time, in a cozy little cottage nestled amidst rolling hills, there lived a mischievous tabby cat named Whiskers. Whiskers was known for his playful antics and his insatiable appetite for tuna."
}
```
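The same request can be made from Python. The helper below just builds the JSON body shown above (`build_payload` is an illustrative name, not part of the repository); the commented lines show how it could be posted with the Requests dependency.

```python
import json

# Build the request body for POST /generate_text.
def build_payload(model: str, prompt: str, temperature: float = 0.7) -> dict:
    return {"model": model, "prompt": prompt, "temperature": temperature}

payload = build_payload("text-davinci-003", "Write a short story about a cat")
body = json.dumps(payload)

# Posting it with the Requests dependency might look like:
# import requests
# resp = requests.post("http://localhost:8000/generate_text",
#                      headers={"Content-Type": "application/json"},
#                      data=body)
# print(resp.json()["response"])
```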

## 🌐 Hosting

### 🚀 Deployment Instructions

1. Create and activate a virtual environment:
   ```shell
   python -m venv .venv
   source .venv/bin/activate
   ```
2. Install dependencies:
   ```shell
   pip install -r requirements.txt
   ```
3. Set up environment variables:
   ```shell
   cp .env.example .env
   ```
4. Run the application:
   ```shell
   uvicorn api.main:app --host 0.0.0.0 --port 8000
   ```
5. To deploy on a platform such as Heroku or AWS:
   - Follow the instructions for your chosen platform.
   - Set the required environment variables (API keys, database credentials, etc.) through the platform's configuration.
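On Heroku, for example, the environment variables can be set through its CLI (the values below are placeholders, not real credentials):

```shell
# Placeholder values; substitute your real API key and connection string.
heroku config:set OPENAI_API_KEY=sk-your-key-here
heroku config:set DATABASE_URL=sqlite:///./app.db
```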

### 🔑 Environment Variables

- `OPENAI_API_KEY`: your OpenAI API key.
- `DATABASE_URL`: your database connection string (if using a database).

## 📜 API Documentation

### 🔍 Endpoints

- **POST `/generate_text`**
  - Description: generates text using OpenAI's models.
  - Request body:
    ```json
    {
      "model": "text-davinci-003",
      "prompt": "Write a short story about a cat",
      "temperature": 0.7
    }
    ```
    - `model`: the OpenAI model to use.
    - `prompt`: the text prompt.
    - `temperature`: controls the randomness of the generated text.
  - Response:
    ```json
    {
      "response": "Once upon a time, in a cozy little cottage..."
    }
    ```
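The request body maps naturally onto a typed model. The dataclass below is only an illustrative sketch of that shape; the server itself presumably validates requests with Pydantic, which is among the listed dependencies.

```python
from dataclasses import dataclass

# Illustrative shape of the /generate_text request body (not the server's code).
@dataclass
class GenerateTextRequest:
    model: str                # OpenAI model to use
    prompt: str               # text prompt
    temperature: float = 0.7  # randomness of the generated text

req = GenerateTextRequest(model="text-davinci-003",
                          prompt="Write a short story about a cat")
```

Defaulting `temperature` mirrors the example request, so clients may omit it.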

### 🔒 Authentication

- This MVP does not implement user authentication; it is designed to be used with a single OpenAI API key stored in the `OPENAI_API_KEY` environment variable.

### 📝 Examples

See the usage examples above.

## 📜 License & Attribution

### 📄 License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

### 🤖 AI-Generated MVP

This MVP was generated entirely with artificial intelligence through CosLynx.com.

No human was directly involved in writing the code for the repository OpenAI-Request-Wrapper-Backend.

## 📞 Contact

For any questions or concerns about this AI-generated MVP, contact CosLynx at:

- 🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!