# Quick Start Guide

Get AlertSage running in 5 minutes!
## Prerequisites

- Python 3.11+ (check with `python --version`)
- Git (check with `git --version`)
- ~500 MB free disk space for models and the dataset
## Installation

### 1. Clone the Repository
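A typical clone step; the repository URL below is inferred from the documentation site (texasbe2trill.github.io/AlertSage), so adjust it if your fork or the canonical repo differs:

```bash
# Clone the repository and enter it (URL inferred from the docs site; adjust if needed)
git clone https://github.com/texasbe2trill/AlertSage.git
cd AlertSage
```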
### 2. Create Virtual Environment

```bash
# macOS/Linux
python3.11 -m venv .venv
source .venv/bin/activate

# Windows
python -m venv .venv
.venv\Scripts\activate
```
### 3. Install Package
This installs AlertSage in editable mode with all dependencies.
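From the project root, run the same command the troubleshooting section below refers to:

```bash
# Editable install with all dev dependencies
pip install -e ".[dev]"
```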
## Verify Installation

### Run Tests
You should see all 9 tests pass. The first run will automatically download the model artifacts (~10 MB).
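Run the suite from the project root:

```bash
# Run all tests; the first run downloads model artifacts (~10 MB)
pytest -v
```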
### Test CLI
You should see a formatted output with classification results.
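For example, with any short incident description (the sample text here is illustrative):

```bash
# Classify a sample incident description
nlp-triage "Multiple failed login attempts from an unknown IP"
```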
## Next Steps

### Try the Streamlit UI
Your browser will open to http://localhost:8501 with an interactive dashboard.
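A typical launch command; the app's entry-point path is an assumption, so check the repository layout for the actual file:

```bash
# Launch the dashboard (entry-point path assumed; check the repo layout)
streamlit run app/streamlit_app.py
```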
### Explore Notebooks

Start with `01_explore_dataset.ipynb` to see the full workflow.
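Assuming the notebooks live in a `notebooks/` directory (adjust the path if they sit elsewhere):

```bash
# Open the first notebook in JupyterLab (notebooks/ directory assumed)
jupyter lab notebooks/01_explore_dataset.ipynb
```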
### Generate Custom Dataset
## Common Issues

### "Module 'triage' not found"

Make sure you ran `pip install -e ".[dev]"` and your virtual environment is activated.
### "No such file: cyber_incidents_simulated.csv"
The dataset auto-downloads when you run tests or notebooks. Manually download if needed:
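A manual fetch looks something like the following; the download URL is not stated here, so substitute the link from the project's releases or data documentation, and note that the `data/` destination directory is also an assumption:

```bash
# Fetch the dataset manually (replace <dataset-url> with the project's published link)
mkdir -p data
curl -L -o data/cyber_incidents_simulated.csv "<dataset-url>"
```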
### "Python 3.11 not found"
Install Python 3.11+ from python.org or use a version manager like pyenv.
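With pyenv, for instance:

```bash
# Install and select the latest Python 3.11.x via pyenv
pyenv install 3.11
pyenv local 3.11
```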
### Port 8501 already in use (Streamlit)
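Streamlit's `--server.port` flag lets you pick another port (the app's entry-point path here is an assumption):

```bash
# Run the UI on an alternate port (app path assumed)
streamlit run app/streamlit_app.py --server.port 8502
```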
## Quick Command Reference

```bash
# CLI with threshold adjustment
nlp-triage --threshold 0.7 "Website experiencing slowdowns"

# JSON output for scripting
nlp-triage --json "Multiple failed login attempts"

# LLM second opinion (requires LLM model)
nlp-triage --llm-second-opinion "Server encrypting files"

# Run all tests
pytest -v

# Check code coverage
pytest --cov=src/triage --cov-report=term-missing

# Preview documentation
mkdocs serve
```
## Set up a local LLM (one-time)
LLM features require a local GGUF model and `llama-cpp-python`:
```bash
# Create a models folder
mkdir -p models
# macOS (Metal GPU):
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# Download a model (choose one)
huggingface-cli download TheBloke/Llama-3.1-8B-Instruct-GGUF \
Llama-3.1-8B-Instruct-Q6_K.gguf --local-dir models
# Or smaller alternatives:
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
mistral-7b-instruct-v0.2.Q6_K.gguf --local-dir models
huggingface-cli download TinyLlama/TinyLlama-1.1B-Chat-v1.0-GGUF \
TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf --local-dir models
# Point the app to your model
export TRIAGE_LLM_MODEL="$(pwd)/models/Llama-3.1-8B-Instruct-Q6_K.gguf"
# Test second opinion
nlp-triage --llm-second-opinion "Suspicious activity detected"
```

Tip: Some models require Hugging Face login and license acceptance.
## Documentation
- Full Documentation: https://texasbe2trill.github.io/AlertSage/
- CLI Guide: docs/cli.md
- UI Guide: docs/ui-guide.md
- Development: docs/development.md
## Getting Help
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Contributing: Contributing
## What's Next?
- ✅ Read the Overview
- ✅ Try different CLI options and thresholds
- ✅ Explore the Streamlit UI features
- ✅ Walk through the Jupyter notebooks
- ✅ Generate your own synthetic dataset
- ✅ Read the documentation site
Happy triaging! 🛡️