From fa56e52f16637a2bc681c634e9393ab369b380a9 Mon Sep 17 00:00:00 2001
From: MemaroX
Date: Thu, 18 Sep 2025 15:39:42 +0300
Subject: [PATCH] Created All GenAI docs

---
 Models.md |  31 ++++++++++++
 Usage.md  |  27 +++++++++++
 example   | 140 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 overview  |  27 +++++++++++
 4 files changed, 225 insertions(+)
 create mode 100644 Models.md
 create mode 100644 Usage.md
 create mode 100644 example
 create mode 100644 overview

diff --git a/Models.md b/Models.md
new file mode 100644
index 0000000..bc1d4fc
--- /dev/null
+++ b/Models.md
@@ -0,0 +1,31 @@
### **Ghaymah GenAI: Available Models**

This page provides an overview of the AI models available through the Ghaymah GenAI API. Each model has a distinct architecture and set of strengths, so you can choose the best tool for your specific application.

### **Model Descriptions**

- **QwQ-32B:** A reasoning model from the Qwen series that excels at step-by-step thinking and complex problem-solving. It is particularly strong in mathematics and programming tasks, making it a great choice for technical applications.

- **DeepSeek-V3-0324:** A robust Mixture-of-Experts (MoE) model. The MoE architecture activates only a subset of parameters per token, making it efficient while delivering excellent performance in reasoning, mathematics, and code generation.

- **gemma-3-4b-it:** A lightweight, state-of-the-art model designed for efficiency and versatility. It handles general-purpose tasks well and is ideal for applications where speed and a small footprint are priorities.

- **Qwen3-32B:** The latest model in the Qwen series, known for its exceptional reasoning and coding abilities. It also offers extensive multilingual support, making it a top choice for global applications.

- **GLM-4.5-Air:** A reasoning model from the GLM series. A more compact version of its flagship counterpart, it balances enhanced logical and mathematical capabilities with a smaller resource footprint. It is particularly strong at tool-calling and agentic tasks.

- **Kimi-K2-Instruct:** A large AI model developed by Moonshot AI. It excels at long-context conversations and also offers strong multilingual support.

### **Model Comparison**

To help you select the most suitable model, the table below compares each model's key strengths and ideal use cases.

| Model | Primary Strengths | Best Suited For | Real-World Application Examples |
| :--- | :--- | :--- | :--- |
| **QwQ-32B** | Strong reasoning, math, and programming capabilities. Efficient. | Technical problem-solving, code generation, and mathematical computations. | A **coding assistant** that auto-completes code, generates complex functions, or debugs logical errors in a program. An **AI tutor** for a STEM education platform. |
| **DeepSeek-V3-0324** | Efficient Mixture-of-Experts (MoE) architecture. Excellent performance in reasoning and code. | Balanced performance for general-purpose use, especially in coding and logical tasks. | A **general-purpose chatbot** for a tech support website that can answer a wide range of questions, from simple queries to more complex coding problems. |
| **gemma-3-4b-it** | Lightweight and fast. Excellent for general text generation and instruction following. | Resource-constrained environments, quick chatbots, and simple text-based applications. | A **mobile app chatbot** that provides quick, real-time responses. An **internal tool** for generating short summaries of emails or reports. |
| **Qwen3-32B** | Exceptional reasoning and coding abilities. Strong multilingual support. | Applications requiring sophisticated logic, complex coding, or global language support. | A **multilingual customer support system** that handles inquiries in various languages. A **software development platform** that can explain complex codebases in multiple languages. |
| **GLM-4.5-Air** | Enhanced logical and mathematical reasoning. Excellent at tool-calling and agentic workflows. | Building AI agents, automating multi-step tasks, and applications that require external tool integration. | An **AI agent** that can book flights for a user by calling an external flight-booking API, check the weather, and send a confirmation email. A **data analysis tool** that can run multiple commands and generate a comprehensive report. |
| **Kimi-K2-Instruct** | Outstanding long-context understanding. Robust multilingual and conversational abilities. | Chatbots, summarization of long documents, and applications requiring in-depth conversational recall. | A **chatbot for legal document review** that can answer questions about a 100-page contract. A **meeting summarization tool** that generates a detailed recap of a long transcript. |
\ No newline at end of file

diff --git a/Usage.md b/Usage.md
new file mode 100644
index 0000000..ebe8f00
--- /dev/null
+++ b/Usage.md
@@ -0,0 +1,27 @@
### **Ghaymah GenAI: Usage Dashboard**

This page provides real-time insight into your API usage and helps you monitor your consumption and stay within your rate limits.

----------

### **Your Usage**

- **Usage Percentage**: A visual representation of your API usage against your current plan's limits. For example, if your plan includes a monthly token or request limit, this bar shows how close you are to reaching that ceiling.

----------

### **Rate Limits**

- **Requests per minute**: Defines how many API requests you can make within a one-minute window. This limit is in place to ensure fair usage and to maintain the stability and performance of the API for all users. If you exceed it, subsequent requests will be temporarily denied.

----------

### **API Keys**

- **Usage per Key**: Lists each of your active API keys along with its specific rate limit (Requests per Minute, or **RPM**). This is particularly useful if you have multiple keys for different projects, as it lets you track and manage the usage of each one individually.

  - **Created**: The date and time when the API key was generated.

  - **RPM**: The specific rate limit assigned to that particular key.
\ No newline at end of file

diff --git a/example b/example
new file mode 100644
index 0000000..bdc0e9c
--- /dev/null
+++ b/example
@@ -0,0 +1,140 @@
### **Ghaymah GenAI: Code Examples**

This section provides ready-to-use code snippets to help you get started with the Ghaymah GenAI API quickly. You can switch between languages to find the example that best suits your project.

To begin, make sure you have your **API key** from the API Keys section.

#### **Python Example**

This example demonstrates how to make a chat completion request using Python with the `openai` library, which is compatible with our API.

**Prerequisites:**

Before running the code, install the `openai` library using pip:

```bash
pip install openai
```

**Code:**

```python
from openai import OpenAI

# Initialize the client with your API key and the base URL.
# It's a best practice to load your API key from an environment variable.
# Replace 'YOUR_API_KEY' with your actual Ghaymah GenAI key.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://genai.ghaymah.systems"
)

# Make a request to the chat completions endpoint
response = client.chat.completions.create(
    model="DeepSeek-V3-0324",  # choose whichever model you prefer
    messages=[
        {
            "role": "user",
            "content": "Explain AI in simple terms"
        }
    ],
    max_tokens=100  # maximum number of tokens the model may generate
)

# Print the content of the response from the model
print(response.choices[0].message.content)
```

**Code Breakdown:**

- `from openai import OpenAI`: Imports the class used to interact with the API.

- `client = OpenAI(...)`: Creates an instance of the client.

  - `api_key`: Your unique API key for authentication.

  - `base_url`: The base endpoint for all API requests.

- `client.chat.completions.create(...)`: The core API call.

  - `model`: Specifies which AI model to use for the request, in this case `DeepSeek-V3-0324`.

  - `messages`: An array of message objects that form the conversation history. This example includes a single user message.

    - `role`: The role of the speaker (`user`, `assistant`, or `system`).

    - `content`: The text of the message.

  - `max_tokens`: The maximum number of tokens the model is allowed to generate in its response.

- `print(...)`: Retrieves the text generated by the model and prints it to the console.

----------

#### **JavaScript Example**

This example demonstrates how to make the same chat completion request using JavaScript.

**Prerequisites:**

Install the `openai` package using npm:

```bash
npm install openai
```

**Code:**

```javascript
import OpenAI from "openai";

// Initialize the client with your API key and the base URL.
// It's a best practice to load your API key from an environment variable.
// Replace 'YOUR_API_KEY' with your actual Ghaymah GenAI key.
const client = new OpenAI({
    apiKey: "YOUR_API_KEY",
    baseURL: "https://genai.ghaymah.systems"
});

async function main() {
    try {
        const response = await client.chat.completions.create({
            model: "DeepSeek-V3-0324",
            messages: [
                {
                    role: "user",
                    content: "Explain AI in simple terms"
                }
            ],
            max_tokens: 100
        });

        console.log(response.choices[0].message.content);
    } catch (error) {
        console.error("Error making API call:", error);
    }
}

main();
```

**Code Breakdown:**

- `import OpenAI from "openai";`: Imports the OpenAI client library.

- `const client = new OpenAI(...)`: Initializes the client with your API key and the custom base URL.

- `async function main()`: Defines an asynchronous function to handle the API call.

- `client.chat.completions.create(...)`: Makes the API call with the same parameters as the Python example (`model`, `messages`, `max_tokens`).

- `console.log(...)`: Logs the model's generated content to the console.

- `try...catch`: Basic error handling that catches and logs any issues with the API request.

- `main()`: Calls the main function to execute the code.
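----------

#### **Handling Rate Limits (Python)**

The Usage dashboard enforces a per-key requests-per-minute (RPM) limit, and requests beyond that limit are temporarily denied. The sketch below shows one way to deal with this from Python. It assumes the service signals an exceeded limit with an HTTP 429 response, which the `openai` client raises as `RateLimitError`; the helper name `create_with_retry` and the backoff values are illustrative, not part of the API.

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://genai.ghaymah.systems"
)

def create_with_retry(messages, model="DeepSeek-V3-0324", max_retries=3):
    """Send a chat completion request, backing off briefly whenever the
    per-key RPM limit is exceeded (assumed to surface as RateLimitError)."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model=model,
                messages=messages,
                max_tokens=100
            )
        except RateLimitError:
            wait = 2 ** attempt  # 1s, 2s, 4s, ...
            print(f"Rate limit reached, retrying in {wait}s...")
            time.sleep(wait)
    raise RuntimeError("Request was still rate-limited after all retries")

response = create_with_retry([{"role": "user", "content": "Explain AI in simple terms"}])
print(response.choices[0].message.content)
```

Exponential backoff keeps retries polite: each failed attempt waits twice as long as the previous one, which usually gives the one-minute window time to reset before the request is resent.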
\ No newline at end of file

diff --git a/overview b/overview
new file mode 100644
index 0000000..4df76a6
--- /dev/null
+++ b/overview
@@ -0,0 +1,27 @@
### **Ghaymah GenAI API Key Management**

This section of your account dashboard is your central hub for managing access to the Ghaymah GenAI platform. It provides the essential information you need to connect securely to our AI models.

----------

### **Connection Information**

To interact with the Ghaymah GenAI API, you need two things: a **base URL** and an **API key**. The base URL is the starting point for all your API calls and ensures your requests are directed to the correct server.

----------

### **Your API Key**

Your API key is a unique credential used to authenticate your requests. It acts as a digital signature, verifying that you have permission to access the services you're requesting. To get started, copy your key from this page and include it in the header of your API calls.

----------

### **API Key Security**

Protecting your API key is crucial. Think of it as a **secret password** for your account: if it falls into the wrong hands, others could make unauthorized requests on your behalf.

- **Keep It Secret:** Never share your API key publicly, embed it in client-side code, or store it in public repositories such as GitHub.

- **Use Environment Variables:** The most secure way to manage your key is to store it as an **environment variable** on your server, so it is loaded at runtime and never exposed in your source code (see the example at the end of this page).

- **Official Base URL:** Always use the official base URL (`https://genai.ghaymah.systems`) to ensure you are connecting to the legitimate Ghaymah GenAI service.
\ No newline at end of file
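----------

### **Example: Loading Your Key from an Environment Variable**

To make the environment-variable recommendation above concrete, here is a minimal Python sketch. The variable name `GHAYMAH_API_KEY` is purely illustrative (any name works as long as your code and your deployment agree on it), and the `openai` client attaches the key to each request for you, so you never hard-code it or build the authorization header by hand.

```python
import os

from openai import OpenAI

# Read the key from the environment instead of hard-coding it in source code.
# GHAYMAH_API_KEY is an illustrative name; set it in your shell first, e.g.:
#   export GHAYMAH_API_KEY="your-real-key"
api_key = os.environ.get("GHAYMAH_API_KEY")
if not api_key:
    raise RuntimeError("GHAYMAH_API_KEY is not set")

client = OpenAI(
    api_key=api_key,                            # never a literal string in code
    base_url="https://genai.ghaymah.systems"    # the official base URL
)

response = client.chat.completions.create(
    model="DeepSeek-V3-0324",
    messages=[{"role": "user", "content": "Explain AI in simple terms"}],
    max_tokens=100
)
print(response.choices[0].message.content)
```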