# 🚀 LLM-Powered URL Health Checker Generator

[![Python 3.10+](https://img.shields.io/badge/Python-3.10%2B-blue)](https://python.org)
[![GitHub](https://img.shields.io/badge/GitHub-repo-black)](https://github.com)
[![License: MIT](https://img.shields.io/badge/License-MIT-green)](https://opensource.org/licenses/MIT)
[![Jupyter](https://img.shields.io/badge/Jupyter-Notebook-orange)](https://jupyter.org)

🤖 **AI-Powered Code Generation** | 🌐 **URL Health Monitoring** | 📊 **DevOps Tooling** | ⚡ **Automated CLI Tools**

> **Transform ideas into code with ANY LLM API!** This project showcases how to harness various Large Language Model APIs (OpenAI, Anthropic Claude, Google Gemini, Azure, local models, and more) to automatically generate production-ready Python CLI tools. Watch as AI creates a sophisticated URL health monitoring system that rivals hand-coded solutions.

## 🎯 What This Project Does

This project demonstrates cutting-edge AI-assisted development by:

- 🧠 **Intelligently generating** complete Python applications using any LLM API
- 🔍 **Creating a professional URL health checker** with advanced monitoring capabilities
- 📈 **Showcasing practical AI applications** in DevOps and system administration
- 🛠️ **Providing a universal framework** for AI-powered code generation across multiple providers

## 🌟 Supported LLM Providers

| Provider | Models | Configuration |
|----------|--------|---------------|
| **OpenAI** | GPT-4, GPT-3.5-Turbo | Standard OpenAI API |
| **Anthropic** | Claude 3 Opus, Sonnet, Haiku | Claude API |
| **Google** | Gemini Pro, Gemini Ultra | Google AI Studio |
| **Azure OpenAI** | GPT-4, GPT-3.5 | Azure OpenAI Service |
| **Cohere** | Command, Generate | Cohere API |
| **Hugging Face** | Various open models | Inference API |
| **Local Models** | LLaMA, Mistral, etc. | Ollama, LM Studio |
| **Custom APIs** | Any OpenAI-compatible | Custom endpoints |

## 🚀 Features

### Core Framework

- **Universal LLM Support**: Works with any OpenAI-compatible API endpoint
- **Provider Flexibility**: Easy switching between different LLM providers
- **Smart Configuration Management**: Secure environment variable handling with validation (see the sketch below)
- **AI-Powered Code Generation**: Uses your choice of LLM to generate complete, functional Python scripts
- **Modular Design**: Clean, reusable configuration class for API integration
- **Error Handling**: Robust validation and error management across providers

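
A minimal sketch of what such an environment-driven configuration class can look like — the class name, variable names, and validation rule here are illustrative, not the notebook's exact code:

```python
import os
from dataclasses import dataclass

from dotenv import load_dotenv


@dataclass
class LLMConfig:
    """Illustrative provider-agnostic settings loaded from environment variables."""
    provider: str
    api_key: str | None
    base_url: str | None
    model_name: str

    @classmethod
    def from_env(cls) -> "LLMConfig":
        load_dotenv()  # read values from the local .env file
        provider = os.getenv("LLM_PROVIDER", "openai").lower()
        api_key = (
            os.getenv("OPENAI_API_KEY")
            or os.getenv("ANTHROPIC_API_KEY")
            or os.getenv("GOOGLE_API_KEY")
            or os.getenv("AZURE_API_KEY")
            or os.getenv("API_KEY")
        )
        config = cls(
            provider=provider,
            api_key=api_key,
            base_url=os.getenv("BASE_URL"),
            model_name=os.getenv("MODEL_NAME", "gpt-4"),
        )
        # Basic validation: local providers such as Ollama don't need a key
        if provider != "ollama" and not config.api_key:
            raise ValueError(f"No API key configured for provider '{provider}'")
        return config
```
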
### Generated URL Health Checker

- **Batch URL Testing**: Process multiple URLs from input files
- **Performance Monitoring**: Measure and report response times
- **Status Code Tracking**: Monitor HTTP response codes
- **Configurable Parameters**: Customizable timeouts and retry mechanisms (illustrated in the sketch below)
- **CSV Export**: Structured data output for analysis
- **Summary Statistics**: Real-time reporting of test results
- **CLI Interface**: Professional command-line tool with argument parsing

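
The heart of the generated tool is typically a small probe function along these lines — a hand-written sketch of the expected behaviour, not the literal LLM output; the function and field names are illustrative:

```python
import time

import requests


def check_url(url: str, timeout: float = 5.0, retries: int = 3) -> dict:
    """Probe one URL, making up to `retries` attempts, and report status and latency."""
    for attempt in range(1, retries + 1):
        start = time.perf_counter()
        try:
            response = requests.get(url, timeout=timeout)
            elapsed_ms = (time.perf_counter() - start) * 1000
            return {
                "url": url,
                "status_code": response.status_code,
                "response_time_ms": round(elapsed_ms, 1),
                "success": response.ok,
            }
        except requests.RequestException:
            if attempt == retries:
                # All attempts exhausted: report the failure instead of raising
                return {"url": url, "status_code": 0, "response_time_ms": 0.0, "success": False}
```
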
## ⚡ Quick Start

```bash
# 1. Clone and setup
git clone https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator.git
cd LLM-URL-Health-Checker-Generator

# 2. Install dependencies
pip install openai anthropic google-generativeai python-dotenv requests

# 3. Configure environment (see provider examples below)
cp .env.example .env
# Edit .env with your preferred provider

# 4. Run the notebook to generate the tool
jupyter notebook Final.ipynb

# 5. Use the generated URL checker
echo "https://github.com" > urls.txt
python url_checker.py --input urls.txt --output results.csv
```

## 📋 Prerequisites

- Python 3.10+
- Access to at least one LLM API (see supported providers)
- Required Python packages (see installation)

## 🔧 Detailed Installation

### 1. **Clone the Repository**

```bash
git clone https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator.git
cd LLM-URL-Health-Checker-Generator
```

### 2. **Set Up Python Environment** (Recommended)

```bash
# Create virtual environment
python -m venv venv

# Activate it (Windows)
venv\Scripts\activate
# Or on Linux/Mac
source venv/bin/activate
```

### 3. **Install Dependencies**

```bash
# Core dependencies
pip install python-dotenv requests jupyter

# Install based on your provider:
pip install openai                # For OpenAI
pip install anthropic             # For Claude
pip install google-generativeai   # For Gemini

# Or install all:
pip install -r requirements.txt
```

### 4. **Configure Environment Variables**

Create a `.env` file based on your chosen provider:

#### OpenAI Configuration
```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your_api_key_here
BASE_URL=https://api.openai.com/v1
MODEL_NAME=gpt-4
```

#### Anthropic Claude Configuration
```env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your_api_key_here
BASE_URL=https://api.anthropic.com
MODEL_NAME=claude-3-opus-20240229
```

#### Google Gemini Configuration
```env
LLM_PROVIDER=google
GOOGLE_API_KEY=your_api_key_here
MODEL_NAME=gemini-pro
```

#### Azure OpenAI Configuration
```env
LLM_PROVIDER=azure
AZURE_API_KEY=your_api_key_here
BASE_URL=https://your-resource.openai.azure.com
AZURE_DEPLOYMENT_NAME=your-deployment
MODEL_NAME=gpt-4
```

#### Local Models (Ollama) Configuration
```env
LLM_PROVIDER=ollama
BASE_URL=http://localhost:11434/v1
MODEL_NAME=llama2:13b
# No API key needed for local models
```

#### Custom OpenAI-Compatible API
```env
LLM_PROVIDER=custom
API_KEY=your_api_key_here
BASE_URL=https://your-custom-endpoint.com/v1
MODEL_NAME=your-model-name
```

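
Whichever provider you pick, the notebook's configuration cell only needs the values above. For OpenAI-compatible endpoints (OpenAI, Ollama, custom gateways) the wiring might look roughly like this sketch — it assumes the `openai` v1 SDK; Anthropic and Gemini use their own client libraries:

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# These variable names mirror the .env examples above
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY", "not-needed"),  # any placeholder works for local models
    base_url=os.getenv("BASE_URL", "https://api.openai.com/v1"),
)
MODEL_NAME = os.getenv("MODEL_NAME", "gpt-4")
```
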
## 🎯 Usage Guide

### Step 1: Generate the URL Checker

1. **Open the Jupyter Notebook:**

   ```bash
   jupyter notebook Final.ipynb
   ```

2. **Select your LLM provider** in the configuration cell

3. **Execute all cells** to:
   - Configure your chosen LLM API connection
   - Generate the URL health checker script
   - Save it as `url_checker.py`

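
Under the hood, this step boils down to a single chat-completion call whose response is written to disk. A rough sketch for an OpenAI-compatible provider — the prompt wording and variable names are illustrative, not the notebook's exact cell:

```python
from pathlib import Path

# `client` and `MODEL_NAME` are the objects built in the configuration sketch above
PROMPT = (
    "Write a complete Python CLI tool named url_checker.py that reads URLs from a file, "
    "checks each one with a configurable timeout and retry count, prints a summary, "
    "and writes the results to a CSV file. Return only the code."
)

response = client.chat.completions.create(
    model=MODEL_NAME,
    temperature=0.2,
    messages=[{"role": "user", "content": PROMPT}],
)

generated_code = response.choices[0].message.content
Path("url_checker.py").write_text(generated_code, encoding="utf-8")
print(f"Wrote {len(generated_code)} characters to url_checker.py")
```
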
### Step 2: Prepare Your URL List

Create a `urls.txt` file with websites to monitor:

```text
https://www.google.com
https://github.com
https://stackoverflow.com
https://httpbin.org/status/200
https://httpbin.org/delay/3
# Comments are ignored
https://nonexistent-website-12345.com
```

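
One reasonable way for the generated script to load this file (skipping blank lines and `#` comments) is a helper like this sketch; the function name is illustrative:

```python
from pathlib import Path


def load_urls(path: str = "urls.txt") -> list[str]:
    """Return non-empty, non-comment lines from the input file."""
    urls = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            urls.append(line)
    return urls
```
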
### Step 3: Run the Health Checker

```bash
# Basic usage
python url_checker.py

# Custom parameters
python url_checker.py --input my_sites.txt --output health_report.csv --timeout 10 --retries 5
```

## 📊 Example Output

### Console Output

```
🔍 URL Health Checker Starting...
✅ https://www.google.com - 200 OK (156ms)
✅ https://github.com - 200 OK (245ms)
❌ https://nonexistent-site.com - Connection failed
📊 Summary: 2/3 successful (66.7%), Average: 200ms
```

### CSV Report (`results.csv`)

```csv
url,status_code,response_time_ms,success
https://www.google.com,200,156.3,True
https://github.com,200,245.7,True
https://nonexistent-site.com,0,0.0,False
```

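
Producing that CSV only needs the standard library. A sketch of how the generated tool can write it, using the same column names as the sample above:

```python
import csv


def write_report(results: list[dict], path: str = "results.csv") -> None:
    """Write one row per checked URL using the columns shown above."""
    fieldnames = ["url", "status_code", "response_time_ms", "success"]
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)
```
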
## 🛠️ Command Line Options

| Option | Default | Description | Example |
|--------|---------|-------------|---------|
| `--input` | `urls.txt` | Input file containing URLs | `--input websites.txt` |
| `--output` | `results.csv` | Output CSV file for results | `--output report.csv` |
| `--timeout` | `5` | Request timeout in seconds | `--timeout 10` |
| `--retries` | `3` | Number of retry attempts | `--retries 5` |

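
If you want to verify (or re-create) this CLI surface, the argument parsing reduces to a few lines of `argparse`. This sketch uses the same defaults as the table, though the generated file will not necessarily match it verbatim:

```python
import argparse


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Check the health of a list of URLs.")
    parser.add_argument("--input", default="urls.txt", help="Input file containing URLs")
    parser.add_argument("--output", default="results.csv", help="Output CSV file for results")
    parser.add_argument("--timeout", type=float, default=5, help="Request timeout in seconds")
    parser.add_argument("--retries", type=int, default=3, help="Number of retry attempts")
    return parser.parse_args()
```
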
## 🏗️ Project Architecture

```
LLM-URL-Health-Checker-Generator/
├── 📓 Final.ipynb        # Main LLM integration notebook
├── 📄 README.md          # This documentation
├── 🐍 url_checker.py     # Generated URL health checker
├── ⚙️ .env               # Environment variables (create this)
├── ⚙️ .env.example       # Example environment configuration
├── 📝 urls.txt           # Input URLs list
├── 📊 results.csv        # Output results
├── 📦 requirements.txt   # Python dependencies
└── 📜 .gitignore         # Git ignore file
```

## 🔐 Security & Best Practices

### Environment Security

- ✅ **Never commit `.env` files** - add them to `.gitignore`
- ✅ **Use environment variables** for all API keys
- ✅ **Rotate API keys regularly**
- ✅ **Monitor API usage** and set billing alerts
- ✅ **Use provider-specific security features** (API key restrictions, rate limits)

### Code Safety

- ✅ **Review generated code** before execution
- ✅ **Test with safe URLs** first
- ✅ **Use virtual environments** to isolate dependencies
- ✅ **Validate input files** before processing

## 🤖 LLM Integration Details

### Model Configuration Examples

#### OpenAI GPT-4
```python
model="gpt-4"
temperature=0.2    # Low for consistent code
max_tokens=8000    # Balanced for complete scripts
```

#### Claude 3 Opus
```python
model="claude-3-opus-20240229"
temperature=0.2
max_tokens=4000
```

#### Gemini Pro
```python
model="gemini-pro"
temperature=0.2
max_tokens=8192
```

#### Local Models (Ollama)
```python
model="codellama:13b"
temperature=0.1    # Very low for code consistency
max_tokens=4096
```

### Prompt Engineering

The project uses carefully crafted prompts optimized for each provider to ensure:

- 🎯 **Focused output** - Only relevant code generation
- 🔧 **Practical functionality** - Real-world utility
- 📚 **Documentation** - Self-explaining code
- ⚡ **Efficiency** - Optimized performance
- 🔄 **Provider compatibility** - Works across different LLMs

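
The exact prompts live in `Final.ipynb`; as a flavour of the approach, a generation prompt covering the points above might look like this illustrative example (not the notebook's literal text):

```python
SYSTEM_PROMPT = "You are a senior Python engineer. Return only runnable code, no explanations."

USER_PROMPT = """
Generate url_checker.py, a command-line tool that:
- reads URLs from a text file (skip blank lines and # comments)
- checks each URL with requests, honouring --timeout and --retries
- records the status code and response time in milliseconds
- writes url,status_code,response_time_ms,success rows to a CSV file
- prints a per-URL line and a final summary to the console
Use only the Python standard library plus requests, and include a main() entry point.
"""
```
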
## 📈 Use Cases

### DevOps & Monitoring

- **Website uptime monitoring**
- **API endpoint health checks**
- **Load balancer status verification**
- **CDN performance testing**
- **Multi-region availability checking**

### Development & Testing

- **Integration test automation**
- **Environment validation**
- **Dependency verification**
- **Service discovery**
- **CI/CD pipeline validation**

## 🤝 Contributing

We welcome contributions! Here's how to get started:

### 1. Fork & Clone

```bash
# Fork the repository on GitHub first, then clone your fork:
git clone https://github.com/<your-username>/LLM-URL-Health-Checker-Generator.git
```

### 2. Create a Feature Branch

```bash
git checkout -b feature/amazing-new-feature
```

### 3. Make Changes

- Add support for new LLM providers
- Improve prompt engineering
- Add new tool generation capabilities
- Update documentation
- Add tests if applicable

### 4. Submit a Pull Request

```bash
git add .
git commit -m "✨ Add amazing new feature"
git push origin feature/amazing-new-feature
```

## 🙏 Acknowledgments

- **LLM Providers** - OpenAI, Anthropic, Google, and the open-source community
- **Python Community** - For excellent libraries (`requests`, `python-dotenv`)
- **Jupyter Project** - For the interactive development environment
- **Open Source Models** - LLaMA, Mistral, and other contributors
- **Contributors** - Everyone who helps improve this project

## 📞 Support & Community

### Getting Help

- 🐛 **Bug Reports**: [GitHub Issues](https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator/issues)
- 💡 **Feature Requests**: [GitHub Discussions](https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator/discussions)
- 📖 **Documentation**: This README and the notebook comments
- 💬 **Community**: Join our discussions for tips and tricks

### Troubleshooting

| Problem | Solution |
|---------|----------|
| `ModuleNotFoundError` | Run `pip install -r requirements.txt` |
| API key error | Check your `.env` file and provider configuration |
| Timeout issues | Increase the `--timeout` parameter or switch providers |
| Generation fails | Verify API credits, permissions, and model availability |
| Provider not working | Check the API endpoint, model name, and authentication |

---

<div align="center">

**🌟 Star this repository if you find it useful!**

**Made with ❤️ and AI** | *Supporting ALL Major LLM Providers*

[![GitHub stars](https://img.shields.io/github/stars/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator)](https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator)](https://github.com/Mahmoud-Zaafan/LLM-URL-Health-Checker-Generator/network)

</div>