# 🚀 LLM-Powered URL Health Checker Generator

[![Python](https://img.shields.io/badge/Python-3.10%2B-blue.svg)](https://python.org)
[![LLM APIs](https://img.shields.io/badge/LLM-Multi--Provider-green.svg)](https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator/src/branch/main)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Jupyter](https://img.shields.io/badge/Jupyter-Notebook-orange.svg)](https://jupyter.org)

🤖 **AI-Powered Code Generation** | 🌐 **URL Health Monitoring** | 📊 **DevOps Tooling** | ⚡ **Automated CLI Tools**

> **Transform ideas into code with ANY LLM API!** This project showcases how to harness various Large Language Model APIs (OpenAI, Anthropic Claude, Google Gemini, Azure, local models, and more) to automatically generate production-ready Python CLI tools. Watch as AI creates a sophisticated URL health monitoring system that rivals hand-coded solutions.

## 🎯 What This Project Does

This project demonstrates cutting-edge AI-assisted development by:

- 🧠 **Intelligently generating** complete Python applications using any LLM API
- 🔍 **Creating a professional URL health checker** with advanced monitoring capabilities
- 📈 **Showcasing practical AI applications** in DevOps and system administration
- 🛠️ **Providing a universal framework** for AI-powered code generation across multiple providers

## 🌟 Supported LLM Providers

| Provider | Models | Configuration |
|----------|---------|--------------|
| **OpenAI** | GPT-4, GPT-3.5-Turbo | Standard OpenAI API |
| **Anthropic** | Claude 3 Opus, Sonnet, Haiku | Claude API |
| **Google** | Gemini Pro, Gemini Ultra | Google AI Studio |
| **Azure OpenAI** | GPT-4, GPT-3.5 | Azure OpenAI Service |
| **Cohere** | Command, Generate | Cohere API |
| **Hugging Face** | Various open models | Inference API |
| **Local Models** | LLaMA, Mistral, etc. | Ollama, LM Studio |
| **Custom APIs** | Any OpenAI-compatible | Custom endpoints |
## 🚀 Features

### Core Framework

- **Universal LLM Support**: Works with any OpenAI-compatible API endpoint
- **Provider Flexibility**: Easy switching between different LLM providers
- **Smart Configuration Management**: Secure environment variable handling with validation
- **AI-Powered Code Generation**: Uses your choice of LLM to generate complete, functional Python scripts
- **Modular Design**: Clean, reusable configuration class for API integration (see the sketch after this list)
- **Error Handling**: Robust validation and error management across providers
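
The configuration class itself lives in `Final.ipynb`; purely as an illustration (class and attribute names here are assumptions, not the notebook's exact code), it might look roughly like this:

```python
# Illustrative sketch only -- the real class lives in Final.ipynb and may differ.
import os
from dotenv import load_dotenv

class LLMConfig:
    """Load provider settings from environment variables and validate them."""

    def __init__(self) -> None:
        load_dotenv()  # pull variables from .env into the process environment
        self.provider = os.getenv("LLM_PROVIDER", "openai")
        self.api_key = os.getenv("OPENAI_API_KEY") or os.getenv("API_KEY")
        self.base_url = os.getenv("BASE_URL", "https://api.openai.com/v1")
        self.model = os.getenv("MODEL_NAME", "gpt-4")

    def validate(self) -> None:
        # Local providers such as Ollama do not need an API key.
        if self.provider != "ollama" and not self.api_key:
            raise ValueError(f"No API key configured for provider '{self.provider}'")
```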
### Generated URL Health Checker

- **Batch URL Testing**: Process multiple URLs from input files
- **Performance Monitoring**: Measure and report response times
- **Status Code Tracking**: Monitor HTTP response codes
- **Configurable Parameters**: Customizable timeouts and retry mechanisms (see the sketch after this list)
- **CSV Export**: Structured data output for analysis
- **Summary Statistics**: Real-time reporting of test results
- **CLI Interface**: Professional command-line tool with argument parsing
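
The generated `url_checker.py` varies between runs and providers, but the per-URL check it implements typically boils down to something like this sketch (function and field names are illustrative) using `requests`:

```python
# Sketch of the per-URL check the generated script typically performs; details will vary.
import time
import requests

def check_url(url: str, timeout: float = 5.0, retries: int = 3) -> dict:
    """Return status code, response time in milliseconds, and a success flag."""
    for attempt in range(1, retries + 1):
        start = time.perf_counter()
        try:
            response = requests.get(url, timeout=timeout)
            elapsed_ms = (time.perf_counter() - start) * 1000
            return {"url": url, "status_code": response.status_code,
                    "response_time_ms": round(elapsed_ms, 1), "success": response.ok}
        except requests.RequestException:
            if attempt == retries:
                break  # give up after the last retry
    return {"url": url, "status_code": 0, "response_time_ms": 0.0, "success": False}
```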
## ⚡ Quick Start

```bash
# 1. Clone and setup
git clone https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator.git
cd LLM-Powered-URL-Health-Checker-Generator

# 2. Install dependencies
pip install openai anthropic google-generativeai python-dotenv requests

# 3. Configure environment (see provider examples below)
cp .env.example .env
# Edit .env with your preferred provider

# 4. Run the notebook to generate the tool
jupyter notebook Final.ipynb

# 5. Use the generated URL checker
echo "https://github.com" > urls.txt
python url_checker.py --input urls.txt --output results.csv
```
## 📋 Prerequisites

- Python 3.10+
- Access to at least one LLM API (see supported providers)
- Required Python packages (see installation)

## 🔧 Detailed Installation

### 1. **Clone the Repository**

```bash
git clone https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator.git
cd LLM-Powered-URL-Health-Checker-Generator
```

### 2. **Set Up Python Environment** (Recommended)

```bash
# Create virtual environment
python -m venv venv

# Activate it (Windows)
venv\Scripts\activate

# Or on Linux/Mac
source venv/bin/activate
```

### 3. **Install Dependencies**

```bash
# Core dependencies
pip install python-dotenv requests jupyter

# Install based on your provider:
pip install openai                 # For OpenAI
pip install anthropic              # For Claude
pip install google-generativeai    # For Gemini

# Or install all:
pip install -r requirements.txt
```

### 4. **Configure Environment Variables**

Create a `.env` file based on your chosen provider:

#### OpenAI Configuration

```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your_api_key_here
BASE_URL=https://api.openai.com/v1
MODEL_NAME=gpt-4
```

#### Anthropic Claude Configuration

```env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your_api_key_here
BASE_URL=https://api.anthropic.com
MODEL_NAME=claude-3-opus-20240229
```

#### Google Gemini Configuration

```env
LLM_PROVIDER=google
GOOGLE_API_KEY=your_api_key_here
MODEL_NAME=gemini-pro
```

#### Azure OpenAI Configuration

```env
LLM_PROVIDER=azure
AZURE_API_KEY=your_api_key_here
BASE_URL=https://your-resource.openai.azure.com
AZURE_DEPLOYMENT_NAME=your-deployment
MODEL_NAME=gpt-4
```

#### Local Models (Ollama) Configuration

```env
LLM_PROVIDER=ollama
BASE_URL=http://localhost:11434/v1
MODEL_NAME=llama2:13b
# No API key needed for local models
```

#### Custom OpenAI-Compatible API

```env
LLM_PROVIDER=custom
API_KEY=your_api_key_here
BASE_URL=https://your-custom-endpoint.com/v1
MODEL_NAME=your-model-name
```
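
Whichever provider you choose, a quick way to confirm the `.env` file is actually being picked up (variable names follow the examples above) is:

```python
# Sanity check: print the provider settings the notebook will see.
import os
from dotenv import load_dotenv

load_dotenv()
print("Provider:", os.getenv("LLM_PROVIDER"))
print("Model:   ", os.getenv("MODEL_NAME"))
print("Base URL:", os.getenv("BASE_URL", "(provider default)"))
```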
## 🎯 Usage Guide

### Step 1: Generate the URL Checker

1. **Open the Jupyter Notebook:**
   ```bash
   jupyter notebook Final.ipynb
   ```
2. **Select your LLM provider** in the configuration cell
3. **Execute all cells** to:
   - Configure your chosen LLM API connection
   - Generate the URL health checker script
   - Save it as `url_checker.py` (an illustrative generation call is sketched after this list)
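
The notebook cells differ per provider; purely as an illustration (prompt abbreviated, not the notebook's exact code), the generation step against an OpenAI-compatible API might look like:

```python
# Illustrative generation step for an OpenAI-compatible provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment; pass base_url=... for other endpoints

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0.2,
    messages=[
        {"role": "system", "content": "You are a senior Python engineer. Return only a complete, runnable script."},
        {"role": "user", "content": "Write a CLI URL health checker with timeouts, retries, and CSV export."},
    ],
)

# Save the generated code to disk as the tool we will run in Step 3.
with open("url_checker.py", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)
```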
### Step 2: Prepare Your URL List

Create a `urls.txt` file with websites to monitor:

```text
https://www.google.com
https://github.com
https://stackoverflow.com
https://httpbin.org/status/200
https://httpbin.org/delay/3
# Comments are ignored
https://nonexistent-website-12345.com
```
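
As the example notes, comment lines are ignored; the generated checker typically reads the file roughly like this (sketch, not the exact generated code):

```python
# How the generated script might load urls.txt: skip blank lines and '#' comments.
def load_urls(path: str = "urls.txt") -> list[str]:
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]
```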
### Step 3: Run the Health Checker

```bash
# Basic usage
python url_checker.py

# Custom parameters
python url_checker.py --input my_sites.txt --output health_report.csv --timeout 10 --retries 5
```
## 📊 Example Output

### Console Output

```
🔍 URL Health Checker Starting...
✅ https://www.google.com - 200 OK (156ms)
✅ https://github.com - 200 OK (245ms)
❌ https://nonexistent-site.com - Connection failed
📊 Summary: 2/3 successful (66.7%), Average: 200ms
```

### CSV Report (`results.csv`)

```csv
url,status_code,response_time_ms,success
https://www.google.com,200,156.3,True
https://github.com,200,245.7,True
https://nonexistent-site.com,0,0.0,False
```
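
Because the report is plain CSV, it is easy to post-process; for example, recomputing the summary with only the standard library:

```python
# Recompute the summary line from results.csv using only the standard library.
import csv

with open("results.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

successes = [r for r in rows if r["success"] == "True"]
rate = 100 * len(successes) / len(rows) if rows else 0.0
avg_ms = sum(float(r["response_time_ms"]) for r in successes) / len(successes) if successes else 0.0
print(f"{len(successes)}/{len(rows)} successful ({rate:.1f}%), average {avg_ms:.0f} ms")
```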
## 🛠️ Command Line Options

| Option | Default | Description | Example |
|--------|---------|-------------|---------|
| `--input` | `urls.txt` | Input file containing URLs | `--input websites.txt` |
| `--output` | `results.csv` | Output CSV file for results | `--output report.csv` |
| `--timeout` | `5` | Request timeout in seconds | `--timeout 10` |
| `--retries` | `3` | Number of retry attempts | `--retries 5` |
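
The generated script normally wires these options up with `argparse`; a minimal sketch consistent with the table above (the actual generated interface may differ slightly):

```python
# Argument parsing consistent with the options table; the generated script may vary.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Batch URL health checker")
    parser.add_argument("--input", default="urls.txt", help="Input file containing URLs")
    parser.add_argument("--output", default="results.csv", help="Output CSV file for results")
    parser.add_argument("--timeout", type=float, default=5, help="Request timeout in seconds")
    parser.add_argument("--retries", type=int, default=3, help="Number of retry attempts")
    return parser.parse_args()
```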
## 🏗️ Project Architecture

```
LLM-Powered-URL-Health-Checker-Generator/
├── 📓 Final.ipynb          # Main LLM integration notebook
├── 📄 README.md            # This documentation
├── 🐍 url_checker.py       # Generated URL health checker
├── ⚙️ .env                 # Environment variables (create this)
├── ⚙️ .env.example         # Example environment configuration
├── 📝 urls.txt             # Input URLs list
├── 📊 results.csv          # Output results
├── 📦 requirements.txt     # Python dependencies
└── 📜 .gitignore           # Git ignore file
```
## 🔐 Security & Best Practices

### Environment Security

- ✅ **Never commit `.env` files** - Add to `.gitignore`
- ✅ **Use environment variables** for all API keys
- ✅ **Rotate API keys regularly**
- ✅ **Monitor API usage** and set billing alerts
- ✅ **Use provider-specific security features** (API key restrictions, rate limits)

### Code Safety

- ✅ **Review generated code** before execution
- ✅ **Test with safe URLs** first
- ✅ **Use virtual environments** to isolate dependencies
- ✅ **Validate input files** before processing (see the snippet after this list)
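
For the last point, one cheap pre-flight validation is to drop anything that does not parse as an http(s) URL before making any requests (illustrative helper, not part of the generated script by default):

```python
# Simple pre-flight validation of the URL list: scheme and host must be present.
from urllib.parse import urlparse

def is_valid_url(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```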
## 🤖 LLM Integration Details

### Model Configuration Examples

#### OpenAI GPT-4

```python
model="gpt-4"
temperature=0.2    # Low for consistent code
max_tokens=8000    # Balanced for complete scripts
```

#### Claude 3 Opus

```python
model="claude-3-opus-20240229"
temperature=0.2
max_tokens=4000
```

#### Gemini Pro

```python
model="gemini-pro"
temperature=0.2
max_tokens=8192
```

#### Local Models (Ollama)

```python
model="codellama:13b"
temperature=0.1    # Very low for code consistency
max_tokens=4096
```
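
Because Ollama exposes an OpenAI-compatible endpoint, the same client code can target a local model simply by changing the base URL and model name; the API key can be any placeholder, since it is ignored locally. A minimal sketch:

```python
# Pointing the OpenAI client at a local Ollama server; only base_url, api_key, and model change.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored locally

response = client.chat.completions.create(
    model="codellama:13b",
    temperature=0.1,
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```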
### Prompt Engineering

The project uses carefully crafted prompts, optimized for each provider, to ensure (an illustrative prompt follows the list):

- 🎯 **Focused output** - Only relevant code generation
- 🔧 **Practical functionality** - Real-world utility
- 📚 **Documentation** - Self-explaining code
- ⚡ **Efficiency** - Optimized performance
- 🔄 **Provider compatibility** - Works across different LLMs
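
The exact prompts live in `Final.ipynb`; as a flavor of the style (illustrative, not verbatim):

```python
# Illustrative prompt structure -- not the notebook's verbatim text.
messages = [
    {"role": "system", "content": (
        "You are an expert Python developer. Output a single, complete, runnable "
        "script with no surrounding prose or markdown fences."
    )},
    {"role": "user", "content": (
        "Write url_checker.py: read URLs from a text file, check each with a "
        "configurable timeout and retry count, print per-URL results, and write "
        "a CSV with url, status_code, response_time_ms, success."
    )},
]
```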
## 📈 Use Cases

### DevOps & Monitoring

- **Website uptime monitoring**
- **API endpoint health checks**
- **Load balancer status verification**
- **CDN performance testing**
- **Multi-region availability checking**

### Development & Testing

- **Integration test automation**
- **Environment validation**
- **Dependency verification**
- **Service discovery**
- **CI/CD pipeline validation**

## 🤝 Contributing

We welcome contributions! Here's how to get started:

### 1. Fork & Clone

```bash
# Fork the repository on GitPasha first, then:
git clone https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator.git
```

### 2. Create Feature Branch

```bash
git checkout -b feature/amazing-new-feature
```

### 3. Make Changes

- Add support for new LLM providers
- Improve prompt engineering
- Add new tool generation capabilities
- Update documentation
- Add tests if applicable

### 4. Submit Pull Request

```bash
git add .
git commit -m "✨ Add amazing new feature"
git push origin feature/amazing-new-feature
```

## 🙏 Acknowledgments

- **LLM Providers** - OpenAI, Anthropic, Google, and the open-source community
- **Python Community** - For excellent libraries (`requests`, `python-dotenv`)
- **Jupyter Project** - For the interactive development environment
- **Open Source Models** - LLaMA, Mistral, and other contributors
- **Contributors** - Everyone who helps improve this project

## 📞 Support & Community

### Getting Help

- 🐛 **Bug Reports**: [GitPasha Issues](https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator/issues)
- 💡 **Feature Requests**: [GitPasha Discussions](https://app.gitpasha.com/MahmoudZaafan/LLM-Powered-URL-Health-Checker-Generator/discussions)
- 📖 **Documentation**: This README and notebook comments
- 💬 **Community**: Join our discussions for tips and tricks

### Troubleshooting

| Problem | Solution |
|---------|----------|
| `ModuleNotFoundError` | Run `pip install -r requirements.txt` |
| `API Key Error` | Check your `.env` file and provider configuration |
| `Timeout Issues` | Increase `--timeout` parameter or switch providers |
| `Generation Fails` | Verify API credits, permissions, and model availability |
| `Provider Not Working` | Check the API endpoint (`BASE_URL`) and model name for your provider |