Compare commits

...

31 Commits

Author SHA1 Message Date
Ryan Walters
966732a6d2 feat: modernize build tooling and add CI/CD workflow
Switch to Bun for 2-5x faster frontend builds, implement cargo-chef for
reliable Rust dependency caching, and add Biome for fast code
formatting.

Build system improvements:
- Replace pnpm with Bun for frontend package management
- Add cargo-chef to Dockerfile for better Rust build layer caching
- Update all commands to use bun instead of pnpm

Developer experience:
- Add comprehensive Justfile commands (format, format-check, db)
- Implement automated PostgreSQL Docker setup with random port
allocation
- Add stricter checks (--deny warnings on clippy, --all-features flag)

Code quality:
- Add Biome formatter for 10-100x faster TypeScript/JavaScript
formatting
- Add GitHub Actions CI/CD workflow for automated checks
- Update .dockerignore with comprehensive exclusions
- Format all code with cargo fmt (Rust) and Biome (TypeScript)

All changes maintain backward compatibility and can be tested
incrementally.
2025-11-18 18:59:03 -06:00
Ryan Walters
3292d35521 build(docker): copy migrations directory to build context
Ensures database migration files are available during the Docker build process.
2025-11-03 12:07:27 -06:00
Ryan Walters
71ac0782d0 feat(json): enhance error context with debug mode detailed reporting
Improve JSON parsing error messages with build-specific behavior:
- Debug builds: Show full parent object context and type mismatch details
- Release builds: Keep minimal snippets to avoid log spam

Add comprehensive test coverage for error handling and path parsing.
2025-11-03 12:04:20 -06:00
Ryan Walters
1c6d2d4b6e perf: implement batch operations and optimize database indexes
Add batch upsert functionality to reduce database round-trips from N to 1 when inserting courses. Create comprehensive database indexes for common query patterns including term/subject lookups, time-series metrics, and job scheduling. Remove redundant indexes and add monitoring guidance for BRIN index effectiveness.
2025-11-03 11:18:42 -06:00
Ryan Walters
51f8256e61 feat: implement comprehensive retry mechanism and improve observability
Add retry tracking to scrape jobs with configurable max retries (default 5), implement
automatic database migrations on startup, and significantly reduce logging noise from
infrastructure layers. Enhance tracing with structured spans for better debugging while
keeping output readable by suppressing verbose trace logs from rate limiters and session
management. Improve error handling with detailed retry context and proper session cookie
validation.
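A minimal sketch of how such a retry budget might be applied when a job fails. The column names match the migration in this diff; the helper itself and the 60-second backoff are illustrative, not the actual scheduler code:

```rust
use sqlx::PgPool;

// Hypothetical helper: reschedule a failed job if retries remain, else delete it.
async fn handle_job_failure(pool: &PgPool, job_id: i32) -> anyhow::Result<()> {
    let rescheduled = sqlx::query(
        "UPDATE scrape_jobs
         SET retry_count = retry_count + 1,
             locked_at = NULL,
             execute_at = NOW() + INTERVAL '60 seconds'
         WHERE id = $1 AND retry_count < max_retries",
    )
    .bind(job_id)
    .execute(pool)
    .await?
    .rows_affected();

    if rescheduled == 0 {
        // Retry budget exhausted (default max_retries = 5): drop the job.
        sqlx::query("DELETE FROM scrape_jobs WHERE id = $1")
            .bind(job_id)
            .execute(pool)
            .await?;
    }
    Ok(())
}
```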
2025-11-03 10:18:07 -06:00
Ryan Walters
b1ed2434f8 feat: add ESLint configuration and testing infrastructure
Add comprehensive ESLint setup with React and TypeScript support, create basic integration tests for the shutdown utilities, and enhance the Justfile with a new check command that runs all validation steps (cargo check, clippy, tests, and linting).
2025-11-03 02:21:35 -06:00
Ryan Walters
47c23459f1 refactor: implement comprehensive graceful shutdown across all services
Implements graceful shutdown with broadcast channels and proper timeout handling
for scraper workers, scheduler, bot service, and status update tasks. Introduces
centralized shutdown utilities and improves service manager to handle parallel
shutdown with per-service timeouts instead of shared timeout budgets.

Key changes:
- Add utils module with shutdown helper functions
- Update ScraperService to return errors on shutdown failures
- Refactor scheduler with cancellable work tasks and 5s grace period
- Extract worker shutdown logic into helper methods for clarity
- Add broadcast channel shutdown support to BotService and status task
- Improve ServiceManager to shut down services in parallel with individual timeouts
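The pattern these changes converge on, as a condensed sketch (the worker loop and per-service timeout are illustrative; the real services carry more state):

```rust
use tokio::sync::broadcast;
use tokio::time::{timeout, Duration};

// Each worker selects between its work and a shutdown signal.
async fn run_worker(mut shutdown: broadcast::Receiver<()>) {
    loop {
        tokio::select! {
            _ = shutdown.recv() => break, // graceful stop requested
            _ = do_one_unit_of_work() => {}
        }
    }
}

async fn do_one_unit_of_work() { /* placeholder for a single work item */ }

#[tokio::main]
async fn main() {
    let (shutdown_tx, _) = broadcast::channel::<()>(1);
    let worker = tokio::spawn(run_worker(shutdown_tx.subscribe()));

    // On shutdown signal: broadcast once, then wait with a per-service
    // timeout rather than one shared budget.
    let _ = shutdown_tx.send(());
    let _ = timeout(Duration::from_secs(5), worker).await;
}
```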
2025-11-03 02:10:01 -06:00
Ryan Walters
8af9b0a1a2 refactor(scraper): implement graceful shutdown with broadcast channels
Replace task abortion with broadcast-based graceful shutdown for scheduler and workers. Implement cancellation tokens for in-progress work with 5s timeout. Add tokio-util dependency for CancellationToken support. Update ServiceManager to use completion channels and abort handles for better service lifecycle control.
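An illustrative sketch of the CancellationToken half of this change, using tokio-util's API; the 5s grace period mirrors the commit, and the task body is a placeholder:

```rust
use tokio_util::sync::CancellationToken;
use tokio::time::{sleep, timeout, Duration};

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let work_token = token.child_token();

    let task = tokio::spawn(async move {
        tokio::select! {
            _ = work_token.cancelled() => { /* abandon in-progress work */ }
            _ = sleep(Duration::from_secs(3600)) => { /* long-running work */ }
        }
    });

    // Request cancellation, then allow a 5s grace period before giving up.
    token.cancel();
    let _ = timeout(Duration::from_secs(5), task).await;
}
```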
2025-11-03 01:22:12 -06:00
020a00254f chore: improve database pool connection options, tighter thresholds & limits 2025-09-14 12:18:39 -05:00
45de5be60d refactor: redistribute main.rs into new modules for app & service initialization 2025-09-14 12:18:15 -05:00
8384f418c8 refactor: remove unused/dead code, apply allowances to the rest 2025-09-14 01:57:30 -05:00
3dca896a35 feat(web): add 10 second timeout layer 2025-09-14 01:47:52 -05:00
1b7d2d2824 fix: make version retrieval search current dir, add basic logs, existence check 2025-09-13 22:08:48 -05:00
e370008d75 fix: pass RAILWAY_GIT_COMMIT_SHA through Docker, provide Cargo.toml for frontend (version retrieval) 2025-09-13 22:04:44 -05:00
176574343f fix: provide proper theme-based colors to all elements necessary 2025-09-13 21:57:56 -05:00
91899bb109 fix: limit devtools panel to dev mode 2025-09-13 21:52:14 -05:00
08ae54c093 fix: use wildcard COPY for .git directory, use RAILWAY_GIT_COMMIT_SHA as fallback 2025-09-13 21:20:16 -05:00
33b8681b19 chore: use locale-based number formatting 2025-09-13 21:12:13 -05:00
398a1b9474 feat: dark mode with theme toggle button 2025-09-13 21:11:16 -05:00
a732ff9a15 feat: better frontend state implementation, acquire version at frontend build time 2025-09-13 20:29:18 -05:00
bfcd868337 refactor: proper implementation of services status, better styling/appearance/logic 2025-09-13 19:34:34 -05:00
99f0d0bc49 fix: add build.rs and .git dir to Dockerfile COPY build step, add git dependency 2025-09-13 19:09:27 -05:00
8b7729788d chore: replace template properties 2025-09-13 19:02:01 -05:00
27b0cb877e feat: display project version on frontend 2025-09-13 18:58:35 -05:00
8ec2f7d36f chore: bump version to 0.3.2 2025-09-13 18:52:23 -05:00
28a8a15b6b feat: embed git commit into binary, provide link on frontend 2025-09-13 18:51:48 -05:00
19b3a98f66 feat: setup span recording for CustomJsonFormatter, use 'yansi' for better ANSI terminal colors in CustomPrettyFormatter 2025-09-13 18:40:55 -05:00
b64aa41b14 feat: better profile-based router assembly, tracing layer for responses with span-based request paths 2025-09-13 18:03:20 -05:00
64449e8976 feat: setup pretty frontend for system status 2025-09-13 17:49:35 -05:00
2e0fefa5ee feat: implement interval backoff for presence indicator 2025-09-13 16:15:33 -05:00
97488494fb chore: bump version to 0.3.0 2025-09-13 15:52:40 -05:00
62 changed files with 4403 additions and 5769 deletions

View File

@@ -13,6 +13,16 @@ go/
# Development configuration
bacon.toml
.env
.env.*
!.env.example
# CI/CD
.github/
.git/
# Development tools
Justfile
rust-toolchain.toml
# Frontend build artifacts and cache
web/node_modules/
@@ -20,4 +30,22 @@ web/dist/
web/.vite/
web/.tanstack/
web/.vscode/
# IDE and editor files
.vscode/
.idea/
*.swp
*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Test coverage
coverage/
*.profdata
*.profraw
# SQLx offline mode (include this in builds)
!.sqlx/

.github/workflows/ci.yml (new file, +65 lines)
View File

@@ -0,0 +1,65 @@
name: CI
on:
push:
branches: [master]
pull_request:
branches: [master]
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
with:
components: rustfmt, clippy
- name: Setup Bun
uses: oven-sh/setup-bun@v1
with:
bun-version: latest
- name: Cache Rust dependencies
uses: Swatinem/rust-cache@v2
with:
cache-on-failure: true
- name: Install frontend dependencies
working-directory: web
run: bun install --frozen-lockfile
- name: Check Rust formatting
run: cargo fmt --all -- --check
- name: Check TypeScript formatting
working-directory: web
run: bun run format:check
- name: TypeScript type check
working-directory: web
run: bun run typecheck
- name: ESLint
working-directory: web
run: bun run lint
- name: Clippy
run: cargo clippy --all-features -- --deny warnings
- name: Run tests
run: cargo test --all-features
- name: Build frontend
working-directory: web
run: bun run build
- name: Build backend
run: cargo build --release --bin banner

Cargo.lock (generated, 4 lines changed)
View File

@@ -218,7 +218,7 @@ dependencies = [
[[package]]
name = "banner"
version = "0.2.3"
version = "0.3.4"
dependencies = [
"anyhow",
"async-trait",
@@ -254,10 +254,12 @@ dependencies = [
"time",
"tl",
"tokio",
"tokio-util",
"tower-http",
"tracing",
"tracing-subscriber",
"url",
"yansi",
]
[[package]]

View File

@@ -1,6 +1,6 @@
[package]
name = "banner"
version = "0.2.3"
version = "0.3.4"
edition = "2024"
default-run = "banner"
@@ -36,6 +36,7 @@ sqlx = { version = "0.8.6", features = [
thiserror = "2.0.16"
time = "0.3.43"
tokio = { version = "1.47.1", features = ["full"] }
tokio-util = "0.7"
tl = "0.7.8"
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.20", features = ["env-filter", "json"] }
@@ -44,11 +45,12 @@ governor = "0.10.1"
once_cell = "1.21.3"
serde_path_to_error = "0.1.17"
num-format = "0.4.4"
tower-http = { version = "0.6.0", features = ["fs", "cors", "trace"] }
tower-http = { version = "0.6.0", features = ["fs", "cors", "trace", "timeout"] }
rust-embed = { version = "8.0", features = ["debug-embed", "include-exclude"] }
mime_guess = "2.0"
clap = { version = "4.5", features = ["derive"] }
rapidhash = "4.1.0"
yansi = "1.0.1"
[dev-dependencies]

View File

@@ -1,60 +1,72 @@
# Build arguments
ARG RUST_VERSION=1.89.0
ARG RAILWAY_GIT_COMMIT_SHA
# Frontend Build Stage
FROM node:22-bookworm-slim AS frontend-builder
# Install pnpm
RUN npm install -g pnpm
# --- Frontend Build Stage ---
FROM oven/bun:1 AS frontend-builder
WORKDIR /app
# Copy backend Cargo.toml for build-time version retrieval
COPY ./Cargo.toml ./
# Copy frontend package files
COPY ./web/package.json ./web/pnpm-lock.yaml ./
COPY ./web/package.json ./web/bun.lock* ./
# Install dependencies
RUN pnpm install --frozen-lockfile
RUN bun install --frozen-lockfile
# Copy frontend source code
COPY ./web ./
# Build frontend
RUN pnpm run build
RUN bun run build
# Rust Build Stage
FROM rust:${RUST_VERSION}-bookworm AS builder
# --- Chef Base Stage ---
FROM lukemathwalker/cargo-chef:latest-rust-${RUST_VERSION} AS chef
WORKDIR /app
# Install build dependencies
# --- Planner Stage ---
FROM chef AS planner
COPY Cargo.toml Cargo.lock ./
COPY build.rs ./
COPY src ./src
# Migrations & .sqlx specifically left out to avoid invalidating cache
RUN cargo chef prepare --recipe-path recipe.json --bin banner
# --- Rust Build Stage ---
FROM chef AS builder
# Set build-time environment variable for Railway Git commit SHA
ARG RAILWAY_GIT_COMMIT_SHA
ENV RAILWAY_GIT_COMMIT_SHA=${RAILWAY_GIT_COMMIT_SHA}
# Copy recipe from planner and build dependencies only
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json --bin banner
# Install build dependencies for final compilation
RUN apt-get update && apt-get install -y \
pkg-config \
libssl-dev \
git \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src
RUN USER=root cargo new --bin banner
WORKDIR /usr/src/banner
# Copy dependency files for better layer caching
COPY ./Cargo.toml ./Cargo.lock* ./
# Build empty app with downloaded dependencies to produce a stable image layer for next build
RUN cargo build --release
# Copy source code
RUN rm src/*.rs
COPY ./src ./src
# Copy built frontend assets
# Copy source code and built frontend assets
COPY Cargo.toml Cargo.lock ./
COPY build.rs ./
COPY .git* ./
COPY src ./src
COPY migrations ./migrations
COPY --from=frontend-builder /app/dist ./web/dist
# Build web app with embedded assets
RUN rm ./target/release/deps/banner*
RUN cargo build --release
RUN cargo build --release --bin banner
# Strip the binary to reduce size
RUN strip target/release/banner
# Runtime Stage - Debian slim for glibc compatibility
# --- Runtime Stage ---
FROM debian:12-slim
ARG APP=/usr/src/app
@@ -78,7 +90,7 @@ RUN addgroup --gid $GID $APP_USER \
&& mkdir -p ${APP}
# Copy application binary
COPY --from=builder --chown=$APP_USER:$APP_USER /usr/src/banner/target/release/banner ${APP}/banner
COPY --from=builder --chown=$APP_USER:$APP_USER /app/target/release/banner ${APP}/banner
# Set proper permissions
RUN chmod +x ${APP}/banner
@@ -101,4 +113,4 @@ ENV HOSTS=0.0.0.0,[::]
# Implicitly uses PORT environment variable
# temporary: running without 'scraper' service
CMD ["sh", "-c", "exec ./banner --services web,bot"]
CMD ["sh", "-c", "exec ./banner --services web,bot"]

View File

@@ -1,28 +1,79 @@
default_services := "bot,web,scraper"
default:
just --list
# Run all checks (format, clippy, tests, lint)
check:
cargo fmt --all -- --check
cargo clippy --all-features -- --deny warnings
cargo nextest run
bun run --cwd web typecheck
bun run --cwd web lint
# Format all Rust and TypeScript code
format:
cargo fmt --all
bun run --cwd web format
# Check formatting without modifying (CI-friendly)
format-check:
cargo fmt --all -- --check
bun run --cwd web format:check
# Start PostgreSQL in Docker and update .env with connection string
db:
#!/usr/bin/env bash
set -euo pipefail
# Find available port
PORT=$(shuf -i 49152-65535 -n 1)
while ss -tlnp 2>/dev/null | grep -q ":$PORT "; do
PORT=$(shuf -i 49152-65535 -n 1)
done
# Start PostgreSQL container
docker run -d \
--name banner-postgres \
-e POSTGRES_PASSWORD=banner \
-e POSTGRES_USER=banner \
-e POSTGRES_DB=banner \
-p "$PORT:5432" \
postgres:17-alpine
# Update .env file
DB_URL="postgresql://banner:banner@localhost:$PORT/banner"
if [ -f .env ]; then
sed -i.bak "s|^DATABASE_URL=.*|DATABASE_URL=$DB_URL|" .env
else
echo "DATABASE_URL=$DB_URL" > .env
fi
echo "PostgreSQL started on port $PORT"
echo "DATABASE_URL=$DB_URL"
echo "Run: sqlx migrate run"
# Auto-reloading frontend server
frontend:
pnpm run -C web dev
bun run --cwd web dev
# Production build of frontend
build-frontend:
pnpm run -C web build
bun run --cwd web build
# Auto-reloading backend server
backend services=default_services:
bacon --headless run -- -- --services "{{services}}"
backend *ARGS:
bacon --headless run -- -- {{ARGS}}
# Production build
build:
pnpm run -C web build
bun run --cwd web build
cargo build --release --bin banner
# Run auto-reloading development build with release characteristics (frontend is embedded, non-auto-reloading)
# This is useful for testing backend release-mode details.
dev-build services=default_services: build-frontend
bacon --headless run -- --profile dev-release -- --services "{{services}}" --tracing pretty
# Run auto-reloading development build with release characteristics
dev-build *ARGS='--services web --tracing pretty': build-frontend
bacon --headless run -- --profile dev-release -- {{ARGS}}
# Auto-reloading development build for both frontend and backend
# Will not notice if the frontend or backend crashes, but both are generally resistant to stopping on their own.
[parallel]
dev services=default_services: frontend (backend services)
dev *ARGS='--services web,bot': frontend (backend ARGS)

View File

@@ -26,11 +26,11 @@ The application consists of three modular services that can be run independently
## Quick Start
```bash
pnpm install -C web # Install frontend dependencies
bun install --cwd web # Install frontend dependencies
cargo build # Build the backend
just dev # Runs auto-reloading dev build
just dev bot,web # Runs auto-reloading dev build, running only the bot and web services
just dev --services bot,web # Runs auto-reloading dev build, running only the bot and web services
just dev-build # Development build with release characteristics (frontend is embedded, non-auto-reloading)
just build # Production build that embeds assets

build.rs (new file, +36 lines)
View File

@@ -0,0 +1,36 @@
use std::process::Command;
fn main() {
// Try to get Git commit hash from Railway environment variable first
let git_hash = std::env::var("RAILWAY_GIT_COMMIT_SHA").unwrap_or_else(|_| {
// Fallback to git command if not on Railway
let output = Command::new("git").args(["rev-parse", "HEAD"]).output();
match output {
Ok(output) => {
if output.status.success() {
String::from_utf8_lossy(&output.stdout).trim().to_string()
} else {
"unknown".to_string()
}
}
Err(_) => "unknown".to_string(),
}
});
// Get the short hash (first 7 characters)
let short_hash = if git_hash != "unknown" && git_hash.len() >= 7 {
git_hash[..7].to_string()
} else {
git_hash.clone()
};
// Set the environment variables that will be available at compile time
println!("cargo:rustc-env=GIT_COMMIT_HASH={}", git_hash);
println!("cargo:rustc-env=GIT_COMMIT_SHORT={}", short_hash);
// Rebuild if the Git commit changes (only works when .git directory is available)
if std::path::Path::new(".git/HEAD").exists() {
println!("cargo:rerun-if-changed=.git/HEAD");
println!("cargo:rerun-if-changed=.git/refs/heads");
}
}
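Elsewhere in the crate, these values become compile-time constants via `env!`; a minimal sketch (the exact call sites in src/ may differ):

```rust
// Reads the variable emitted by build.rs at compile time.
const GIT_COMMIT_SHORT: &str = env!("GIT_COMMIT_SHORT");

fn version_banner() -> String {
    // e.g. "0.3.4 (966732a)"
    format!("{} ({})", env!("CARGO_PKG_VERSION"), GIT_COMMIT_SHORT)
}
```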

View File

@@ -0,0 +1,3 @@
-- Add retry tracking columns to scrape_jobs table
ALTER TABLE scrape_jobs ADD COLUMN retry_count INTEGER NOT NULL DEFAULT 0 CHECK (retry_count >= 0);
ALTER TABLE scrape_jobs ADD COLUMN max_retries INTEGER NOT NULL DEFAULT 5 CHECK (max_retries >= 0);

View File

@@ -0,0 +1,45 @@
-- Performance optimization indexes
-- Index for term-based queries (most common access pattern)
CREATE INDEX IF NOT EXISTS idx_courses_term_code ON courses(term_code);
-- Index for subject-based filtering
CREATE INDEX IF NOT EXISTS idx_courses_subject ON courses(subject);
-- Composite index for subject + term queries
CREATE INDEX IF NOT EXISTS idx_courses_subject_term ON courses(subject, term_code);
-- Index for course number lookups
CREATE INDEX IF NOT EXISTS idx_courses_course_number ON courses(course_number);
-- Index for last scraped timestamp (useful for finding stale data)
CREATE INDEX IF NOT EXISTS idx_courses_last_scraped ON courses(last_scraped_at);
-- Index for course metrics time-series queries
-- BRIN index is optimal for time-series data
CREATE INDEX IF NOT EXISTS idx_course_metrics_timestamp ON course_metrics USING BRIN(timestamp);
-- B-tree index for specific course metric lookups
CREATE INDEX IF NOT EXISTS idx_course_metrics_course_timestamp
ON course_metrics(course_id, timestamp DESC);
-- Partial index for pending scrape jobs (only unlocked jobs)
CREATE INDEX IF NOT EXISTS idx_scrape_jobs_pending
ON scrape_jobs(execute_at ASC)
WHERE locked_at IS NULL;
-- Index for high-priority job processing
CREATE INDEX IF NOT EXISTS idx_scrape_jobs_priority_pending
ON scrape_jobs(priority DESC, execute_at ASC)
WHERE locked_at IS NULL;
-- Index for retry tracking
CREATE INDEX IF NOT EXISTS idx_scrape_jobs_retry_count
ON scrape_jobs(retry_count)
WHERE retry_count > 0 AND locked_at IS NULL;
-- Analyze tables to update statistics
ANALYZE courses;
ANALYZE course_metrics;
ANALYZE course_audits;
ANALYZE scrape_jobs;

View File

@@ -0,0 +1,53 @@
-- Index Optimization Follow-up Migration
-- Reason: Redundant with composite index idx_courses_subject_term
DROP INDEX IF EXISTS idx_courses_subject;
-- Remove: idx_scrape_jobs_retry_count
DROP INDEX IF EXISTS idx_scrape_jobs_retry_count;
-- Purpose: Optimize the scheduler's frequent query (runs every 60 seconds)
CREATE INDEX IF NOT EXISTS idx_scrape_jobs_scheduler_lookup
ON scrape_jobs(target_type, target_payload)
WHERE locked_at IS NULL;
-- Note: We use (target_type, target_payload) instead of including locked_at
-- in the index columns because:
-- 1. The WHERE clause filters locked_at IS NULL (partial index optimization)
-- 2. target_payload is JSONB and already large; keeping it as an indexed column
-- allows PostgreSQL to use index-only scans for the SELECT target_payload query
-- 3. This design minimizes index size while maximizing query performance
-- Purpose: Enable efficient audit trail queries by course
CREATE INDEX IF NOT EXISTS idx_course_audits_course_timestamp
ON course_audits(course_id, timestamp DESC);
-- Purpose: Enable queries like "Show all changes in the last 24 hours"
CREATE INDEX IF NOT EXISTS idx_course_audits_timestamp
ON course_audits(timestamp DESC);
-- The BRIN index on course_metrics(timestamp) assumes data is inserted in
-- chronological order. BRIN indexes are only effective when data is physically
-- ordered on disk. If you perform:
-- - Backfills of historical data
-- - Out-of-order inserts
-- - Frequent UPDATEs that move rows
--
-- Then the BRIN index effectiveness will degrade. Monitor with:
-- SELECT * FROM brin_page_items(get_raw_page('idx_course_metrics_timestamp', 1));
--
-- If you see poor selectivity, consider:
-- 1. REINDEX to rebuild after bulk loads
-- 2. Switch to B-tree if inserts are not time-ordered
-- 3. Use CLUSTER to physically reorder the table (requires downtime)
COMMENT ON INDEX idx_course_metrics_timestamp IS
'BRIN index - requires chronologically ordered inserts for efficiency. Monitor selectivity.';
-- Update statistics for query planner
ANALYZE courses;
ANALYZE course_metrics;
ANALYZE course_audits;
ANALYZE scrape_jobs;

src/app.rs (new file, +168 lines)
View File

@@ -0,0 +1,168 @@
use crate::banner::BannerApi;
use crate::cli::ServiceName;
use crate::config::Config;
use crate::scraper::ScraperService;
use crate::services::bot::BotService;
use crate::services::manager::ServiceManager;
use crate::services::web::WebService;
use crate::state::AppState;
use crate::web::routes::BannerState;
use figment::value::UncasedStr;
use figment::{Figment, providers::Env};
use sqlx::postgres::PgPoolOptions;
use std::process::ExitCode;
use std::sync::Arc;
use std::time::Duration;
use tracing::{error, info};
/// Main application struct containing all necessary components
pub struct App {
config: Config,
db_pool: sqlx::PgPool,
banner_api: Arc<BannerApi>,
app_state: AppState,
banner_state: BannerState,
service_manager: ServiceManager,
}
impl App {
/// Create a new App instance with all necessary components initialized
pub async fn new() -> Result<Self, anyhow::Error> {
// Load configuration
let config: Config = Figment::new()
.merge(Env::raw().map(|k| {
if k == UncasedStr::new("RAILWAY_DEPLOYMENT_DRAINING_SECONDS") {
"SHUTDOWN_TIMEOUT".into()
} else {
k.into()
}
}))
.extract()
.expect("Failed to load config");
// Check if the database URL is via private networking
let is_private = config.database_url.contains("railway.internal");
let slow_threshold = Duration::from_millis(if is_private { 200 } else { 500 });
// Create database connection pool
let db_pool = PgPoolOptions::new()
.min_connections(0)
.max_connections(4)
.acquire_slow_threshold(slow_threshold)
.acquire_timeout(Duration::from_secs(4))
.idle_timeout(Duration::from_secs(60 * 2))
.max_lifetime(Duration::from_secs(60 * 30))
.connect(&config.database_url)
.await
.expect("Failed to create database pool");
info!(
is_private = is_private,
slow_threshold = format!("{:.2?}", slow_threshold),
"database pool established"
);
// Run database migrations
info!("Running database migrations...");
sqlx::migrate!("./migrations")
.run(&db_pool)
.await
.expect("Failed to run database migrations");
info!("Database migrations completed successfully");
// Create BannerApi and AppState
let banner_api = BannerApi::new_with_config(
config.banner_base_url.clone(),
config.rate_limiting.clone().into(),
)
.expect("Failed to create BannerApi");
let banner_api_arc = Arc::new(banner_api);
let app_state = AppState::new(banner_api_arc.clone(), db_pool.clone());
// Create BannerState for web service
let banner_state = BannerState {};
Ok(App {
config,
db_pool,
banner_api: banner_api_arc,
app_state,
banner_state,
service_manager: ServiceManager::new(),
})
}
/// Setup and register services based on enabled service list
pub fn setup_services(&mut self, services: &[ServiceName]) -> Result<(), anyhow::Error> {
// Register enabled services with the manager
if services.contains(&ServiceName::Web) {
let web_service =
Box::new(WebService::new(self.config.port, self.banner_state.clone()));
self.service_manager
.register_service(ServiceName::Web.as_str(), web_service);
}
if services.contains(&ServiceName::Scraper) {
let scraper_service = Box::new(ScraperService::new(
self.db_pool.clone(),
self.banner_api.clone(),
));
self.service_manager
.register_service(ServiceName::Scraper.as_str(), scraper_service);
}
// Check if any services are enabled
if !self.service_manager.has_services() && !services.contains(&ServiceName::Bot) {
error!("No services enabled. Cannot start application.");
return Err(anyhow::anyhow!("No services enabled"));
}
Ok(())
}
/// Setup bot service if enabled
pub async fn setup_bot_service(&mut self) -> Result<(), anyhow::Error> {
use std::sync::Arc;
use tokio::sync::{Mutex, broadcast};
// Create shutdown channel for status update task
let (status_shutdown_tx, status_shutdown_rx) = broadcast::channel(1);
let status_task_handle = Arc::new(Mutex::new(None));
let client = BotService::create_client(
&self.config,
self.app_state.clone(),
status_task_handle.clone(),
status_shutdown_rx,
)
.await
.expect("Failed to create Discord client");
let bot_service = Box::new(BotService::new(
client,
status_task_handle,
status_shutdown_tx,
));
self.service_manager
.register_service(ServiceName::Bot.as_str(), bot_service);
Ok(())
}
/// Start all registered services
pub fn start_services(&mut self) {
self.service_manager.spawn_all();
}
/// Run the application and handle shutdown signals
pub async fn run(self) -> ExitCode {
use crate::signals::handle_shutdown_signals;
handle_shutdown_signals(self.service_manager, self.config.shutdown_timeout).await
}
/// Get a reference to the configuration
pub fn config(&self) -> &Config {
&self.config
}
}

View File

@@ -7,7 +7,7 @@ use std::{
};
use crate::banner::{
BannerSession, SessionPool, create_shared_rate_limiter_with_config,
BannerSession, SessionPool, create_shared_rate_limiter,
errors::BannerApiError,
json::parse_json_with_context,
middleware::TransparentMiddleware,
@@ -15,7 +15,7 @@ use crate::banner::{
nonce,
query::SearchQuery,
rate_limit_middleware::RateLimitMiddleware,
rate_limiter::{RateLimitConfig, SharedRateLimiter, create_shared_rate_limiter},
rate_limiter::{RateLimitConfig, SharedRateLimiter},
util::user_agent,
};
use anyhow::{Context, Result, anyhow};
@@ -35,6 +35,7 @@ pub struct BannerApi {
base_url: String,
}
#[allow(dead_code)]
impl BannerApi {
/// Creates a new Banner API client.
pub fn new(base_url: String) -> Result<Self> {
@@ -43,7 +44,7 @@ impl BannerApi {
/// Creates a new Banner API client with custom rate limiting configuration.
pub fn new_with_config(base_url: String, rate_limit_config: RateLimitConfig) -> Result<Self> {
let rate_limiter = create_shared_rate_limiter_with_config(rate_limit_config);
let rate_limiter = create_shared_rate_limiter(Some(rate_limit_config));
let http = ClientBuilder::new(
Client::builder()
@@ -151,6 +152,13 @@ impl BannerApi {
}
/// Performs a course search and handles common response processing.
#[tracing::instrument(
skip(self, query),
fields(
term = %term,
subject = %query.get_subject().unwrap_or(&"all".to_string())
)
)]
async fn perform_search(
&self,
term: &str,
@@ -317,12 +325,6 @@ impl BannerApi {
sort: &str,
sort_descending: bool,
) -> Result<SearchResult, BannerApiError> {
debug!(
term = term,
subject = query.get_subject().map(|s| s.as_str()).unwrap_or("all"),
max_results = query.get_max_results(),
"Starting course search"
);
self.perform_search(term, query, sort, sort_descending)
.await
}

View File

@@ -1,10 +1,14 @@
//! JSON parsing utilities for the Banner API client.
use anyhow::Result;
use serde_json;
use serde_json::{self, Value};
/// Attempt to parse JSON and, on failure, include a contextual snippet of the
/// line where the error occurred. This prevents dumping huge JSON bodies to logs.
/// line where the error occurred.
///
/// In debug builds, this provides detailed context including the full JSON object
/// containing the error and type mismatch information. In release builds, it shows
/// a minimal snippet to prevent dumping huge JSON bodies to production logs.
pub fn parse_json_with_context<T: serde::de::DeserializeOwned>(body: &str) -> Result<T> {
let jd = &mut serde_json::Deserializer::from_str(body);
match serde_path_to_error::deserialize(jd) {
@@ -12,27 +16,247 @@ pub fn parse_json_with_context<T: serde::de::DeserializeOwned>(body: &str) -> Re
Err(err) => {
let inner_err = err.inner();
let (line, column) = (inner_err.line(), inner_err.column());
let snippet = build_error_snippet(body, line, column, 20);
let path = err.path().to_string();
let msg = inner_err.to_string();
let loc = format!(" at line {line} column {column}");
let msg_without_loc = msg.strip_suffix(&loc).unwrap_or(&msg).to_string();
let mut final_err = String::new();
if !path.is_empty() && path != "." {
final_err.push_str(&format!("for path '{}' ", path));
}
final_err.push_str(&format!(
"({msg_without_loc}) at line {line} column {column}"
));
final_err.push_str(&format!("\n{snippet}"));
// Build error message differently for debug vs release builds
let final_err = if cfg!(debug_assertions) {
// Debug mode: provide detailed context
let type_info = parse_type_mismatch(&msg_without_loc);
let context = extract_json_object_at_path(body, err.path(), line, column);
let mut err_msg = String::new();
if !path.is_empty() && path != "." {
err_msg.push_str(&format!("for path '{}'\n", path));
}
err_msg.push_str(&format!(
"({}) at line {} column {}\n\n",
type_info, line, column
));
err_msg.push_str(&context);
err_msg
} else {
// Release mode: minimal snippet to keep logs concise
let snippet = build_error_snippet(body, line, column, 20);
let mut err_msg = String::new();
if !path.is_empty() && path != "." {
err_msg.push_str(&format!("for path '{}' ", path));
}
err_msg.push_str(&format!(
"({}) at line {} column {}",
msg_without_loc, line, column
));
err_msg.push_str(&format!("\n{}", snippet));
err_msg
};
Err(anyhow::anyhow!(final_err))
}
}
}
/// Extract type mismatch information from a serde error message.
///
/// Parses error messages like "invalid type: null, expected a string" to extract
/// the expected and actual types for clearer error reporting.
///
/// Returns a formatted string like "(expected a string, got null)" or the original
/// message if parsing fails.
fn parse_type_mismatch(error_msg: &str) -> String {
// Try to parse "invalid type: X, expected Y" format
if let Some(invalid_start) = error_msg.find("invalid type: ") {
let after_prefix = &error_msg[invalid_start + "invalid type: ".len()..];
if let Some(comma_pos) = after_prefix.find(", expected ") {
let actual_type = &after_prefix[..comma_pos];
let expected_part = &after_prefix[comma_pos + ", expected ".len()..];
// Clean up expected part (remove " at line X column Y" if present)
let expected_type = expected_part
.split(" at line ")
.next()
.unwrap_or(expected_part)
.trim();
return format!("expected {}, got {}", expected_type, actual_type);
}
}
// Try to parse "expected X at line Y" format
if error_msg.starts_with("expected ")
&& let Some(expected_part) = error_msg.split(" at line ").next()
{
return expected_part.to_string();
}
// Fallback: return original message without location info
error_msg.to_string()
}
/// Extract and pretty-print the JSON object/array containing the parse error.
///
/// This function navigates to the error location using the serde path and extracts
/// the parent object or array to provide better context for debugging.
///
/// # Arguments
/// * `body` - The raw JSON string
/// * `path` - The serde path to the error (e.g., "data[0].faculty[0].displayName")
/// * `line` - Line number of the error (for fallback)
/// * `column` - Column number of the error (for fallback)
///
/// # Returns
/// A formatted string containing the JSON object with the error, or a fallback snippet
fn extract_json_object_at_path(
body: &str,
path: &serde_path_to_error::Path,
line: usize,
column: usize,
) -> String {
// Try to parse the entire JSON structure
let root_value: Value = match serde_json::from_str(body) {
Ok(v) => v,
Err(_) => {
// If we can't parse the JSON at all, fall back to line snippet
return build_error_snippet(body, line, column, 20);
}
};
// Navigate to the error location using the path
let path_str = path.to_string();
let segments = parse_path_segments(&path_str);
let (context_value, context_name) = navigate_to_context(&root_value, &segments);
// Pretty-print the context value with limited depth to avoid huge output
match serde_json::to_string_pretty(&context_value) {
Ok(pretty) => {
// Limit output to ~50 lines to prevent log spam
let lines: Vec<&str> = pretty.lines().collect();
let truncated = if lines.len() > 50 {
let mut result = lines[..47].join("\n");
result.push_str("\n ... (truncated, ");
result.push_str(&(lines.len() - 47).to_string());
result.push_str(" more lines)");
result
} else {
pretty
};
format!("{} at '{}':\n{}", context_name, path_str, truncated)
}
Err(_) => {
// Fallback to simple snippet if pretty-print fails
build_error_snippet(body, line, column, 20)
}
}
}
/// Parse a JSON path string into segments for navigation.
///
/// Converts paths like "data[0].faculty[1].displayName" into a sequence of
/// object keys and array indices.
fn parse_path_segments(path: &str) -> Vec<PathSegment> {
let mut segments = Vec::new();
let mut current = String::new();
let mut in_bracket = false;
for ch in path.chars() {
match ch {
'.' if !in_bracket => {
if !current.is_empty() {
segments.push(PathSegment::Key(current.clone()));
current.clear();
}
}
'[' => {
if !current.is_empty() {
segments.push(PathSegment::Key(current.clone()));
current.clear();
}
in_bracket = true;
}
']' => {
if in_bracket && !current.is_empty() {
if let Ok(index) = current.parse::<usize>() {
segments.push(PathSegment::Index(index));
}
current.clear();
}
in_bracket = false;
}
_ => current.push(ch),
}
}
if !current.is_empty() {
segments.push(PathSegment::Key(current));
}
segments
}
/// Represents a segment in a JSON path (either an object key or array index).
#[derive(Debug)]
enum PathSegment {
Key(String),
Index(usize),
}
/// Navigate through a JSON value using path segments and return the appropriate context.
///
/// This function walks the JSON structure and returns the parent object/array that
/// contains the error, providing meaningful context for debugging.
///
/// # Returns
/// A tuple of (context_value, description) where context_value is the JSON to display
/// and description is a human-readable name for what we're showing.
fn navigate_to_context<'a>(
mut current: &'a Value,
segments: &[PathSegment],
) -> (&'a Value, &'static str) {
// If path is empty or just root, return the whole value
if segments.is_empty() {
return (current, "Root object");
}
// Try to navigate to the parent of the error location
// We want to show the containing object/array, not just the failing field
let parent_depth = segments.len().saturating_sub(1);
for (i, segment) in segments.iter().enumerate() {
// Stop one level before the end to show the parent context
if i >= parent_depth {
break;
}
match segment {
PathSegment::Key(key) => {
if let Some(next) = current.get(key) {
current = next;
} else {
// Can't navigate further, return what we have
return (current, "Partial context (navigation stopped)");
}
}
PathSegment::Index(idx) => {
if let Some(next) = current.get(idx) {
current = next;
} else {
return (current, "Partial context (index out of bounds)");
}
}
}
}
(current, "Object containing error")
}
fn build_error_snippet(body: &str, line: usize, column: usize, context_len: usize) -> String {
let target_line = body.lines().nth(line.saturating_sub(1)).unwrap_or("");
if target_line.is_empty() {
@@ -53,3 +277,139 @@ fn build_error_snippet(body: &str, line: usize, column: usize, context_len: usiz
format!("...{slice}...\n {indicator}")
}
#[cfg(test)]
mod tests {
use super::*;
use serde::Deserialize;
#[test]
fn test_parse_type_mismatch_invalid_type() {
let msg = "invalid type: null, expected a string at line 45 column 29";
let result = parse_type_mismatch(msg);
assert_eq!(result, "expected a string, got null");
}
#[test]
fn test_parse_type_mismatch_expected() {
let msg = "expected value at line 1 column 1";
let result = parse_type_mismatch(msg);
assert_eq!(result, "expected value");
}
#[test]
fn test_parse_path_segments_simple() {
let segments = parse_path_segments("data.name");
assert_eq!(segments.len(), 2);
match &segments[0] {
PathSegment::Key(k) => assert_eq!(k, "data"),
_ => panic!("Expected Key segment"),
}
}
#[test]
fn test_parse_path_segments_with_array() {
let segments = parse_path_segments("data[0].faculty[1].displayName");
assert_eq!(segments.len(), 5);
match &segments[0] {
PathSegment::Key(k) => assert_eq!(k, "data"),
_ => panic!("Expected Key segment"),
}
match &segments[1] {
PathSegment::Index(i) => assert_eq!(*i, 0),
_ => panic!("Expected Index segment"),
}
}
#[test]
fn test_parse_json_with_context_null_value() {
#[derive(Debug, Deserialize)]
struct TestStruct {
name: String,
}
let json = r#"{"name": null}"#;
let result: Result<TestStruct> = parse_json_with_context(json);
assert!(result.is_err());
let err_msg = result.unwrap_err().to_string();
// Should contain path info
assert!(err_msg.contains("name"));
// In debug mode, should contain detailed context
if cfg!(debug_assertions) {
assert!(err_msg.contains("expected"));
}
}
#[test]
fn test_navigate_to_context() {
let json = r#"{"data": [{"faculty": [{"name": "John"}]}]}"#;
let value: Value = serde_json::from_str(json).unwrap();
let segments = parse_path_segments("data[0].faculty[0].name");
let (context, _) = navigate_to_context(&value, &segments);
// Should return the faculty[0] object (parent of 'name')
assert!(context.is_object());
assert!(context.get("name").is_some());
}
#[test]
fn test_realistic_banner_error() {
#[derive(Debug, Deserialize)]
struct Course {
#[allow(dead_code)]
#[serde(rename = "courseTitle")]
course_title: String,
faculty: Vec<Faculty>,
}
#[derive(Debug, Deserialize)]
struct Faculty {
#[serde(rename = "displayName")]
display_name: String,
#[allow(dead_code)]
email: String,
}
#[derive(Debug, Deserialize)]
struct SearchResult {
data: Vec<Course>,
}
// Simulate Banner API response with null faculty displayName
// This mimics the actual error from SPN subject scrape
let json = r#"{
"data": [
{
"courseTitle": "Spanish Conversation",
"faculty": [
{
"displayName": null,
"email": "instructor@utsa.edu"
}
]
}
]
}"#;
let result: Result<SearchResult> = parse_json_with_context(json);
assert!(result.is_err());
let err_msg = result.unwrap_err().to_string();
println!("\n=== Error output in debug mode ===\n{}\n", err_msg);
// Verify error contains key information
assert!(err_msg.contains("data[0].faculty[0].displayName"));
// In debug mode, should show detailed context
if cfg!(debug_assertions) {
// Should show type mismatch info
assert!(err_msg.contains("expected") && err_msg.contains("got"));
// Should show surrounding JSON context with the faculty object
assert!(err_msg.contains("email"));
}
}
}

View File

@@ -3,10 +3,13 @@
use http::Extensions;
use reqwest::{Request, Response};
use reqwest_middleware::{Middleware, Next};
use tracing::{trace, warn};
use tracing::{debug, trace, warn};
pub struct TransparentMiddleware;
/// Threshold for logging slow requests at DEBUG level (in milliseconds)
const SLOW_REQUEST_THRESHOLD_MS: u128 = 1000;
#[async_trait::async_trait]
impl Middleware for TransparentMiddleware {
async fn handle(
@@ -15,33 +18,56 @@ impl Middleware for TransparentMiddleware {
extensions: &mut Extensions,
next: Next<'_>,
) -> std::result::Result<Response, reqwest_middleware::Error> {
trace!(
domain = req.url().domain(),
headers = ?req.headers(),
"{method} {path}",
method = req.method().to_string(),
path = req.url().path(),
);
let method = req.method().to_string();
let path = req.url().path().to_string();
let start = std::time::Instant::now();
let response_result = next.run(req, extensions).await;
let duration = start.elapsed();
match response_result {
Ok(response) => {
if response.status().is_success() {
trace!(
"{code} {reason} {path}",
code = response.status().as_u16(),
reason = response.status().canonical_reason().unwrap_or("??"),
path = response.url().path(),
);
let duration_ms = duration.as_millis();
if duration_ms >= SLOW_REQUEST_THRESHOLD_MS {
debug!(
method = method,
path = path,
status = response.status().as_u16(),
duration_ms = duration_ms,
"Request completed (slow)"
);
} else {
trace!(
method = method,
path = path,
status = response.status().as_u16(),
duration_ms = duration_ms,
"Request completed"
);
}
Ok(response)
} else {
let e = response.error_for_status_ref().unwrap_err();
warn!(error = ?e, "Request failed (server)");
warn!(
method = method,
path = path,
error = ?e,
status = response.status().as_u16(),
duration_ms = duration.as_millis(),
"Request failed"
);
Ok(response)
}
}
Err(error) => {
warn!(error = ?error, "Request failed (middleware)");
warn!(
method = method,
path = path,
error = ?error,
duration_ms = duration.as_millis(),
"Request failed"
);
Err(error)
}
}

View File

@@ -258,6 +258,7 @@ impl TimeRange {
}
/// Get duration in minutes
#[allow(dead_code)]
pub fn duration_minutes(&self) -> i64 {
let start_minutes = self.start.hour() as i64 * 60 + self.start.minute() as i64;
let end_minutes = self.end.hour() as i64 * 60 + self.end.minute() as i64;
@@ -302,6 +303,7 @@ impl DateRange {
}
/// Check if a specific date falls within this range
#[allow(dead_code)]
pub fn contains_date(&self, date: NaiveDate) -> bool {
date >= self.start && date <= self.end
}

View File

@@ -147,11 +147,6 @@ impl Term {
},
}
}
/// Returns a long string representation of the term (e.g., "Fall 2025")
pub fn to_long_string(&self) -> String {
format!("{} {}", self.season, self.year)
}
}
impl TermPoint {

View File

@@ -32,6 +32,7 @@ pub struct SearchQuery {
course_number_range: Option<Range>,
}
#[allow(dead_code)]
impl SearchQuery {
/// Creates a new SearchQuery with default values
pub fn new() -> Self {

View File

@@ -4,7 +4,7 @@ use crate::banner::rate_limiter::{RequestType, SharedRateLimiter};
use http::Extensions;
use reqwest::{Request, Response};
use reqwest_middleware::{Middleware, Next};
use tracing::{debug, trace, warn};
use tracing::debug;
use url::Url;
/// Middleware that enforces rate limiting based on request URL patterns
@@ -18,6 +18,16 @@ impl RateLimitMiddleware {
Self { rate_limiter }
}
/// Returns a human-readable description of the rate limit for a request type
fn get_rate_limit_description(request_type: RequestType) -> &'static str {
match request_type {
RequestType::Session => "6 rpm (~10s interval)",
RequestType::Search => "30 rpm (~2s interval)",
RequestType::Metadata => "20 rpm (~3s interval)",
RequestType::Reset => "10 rpm (~6s interval)",
}
}
/// Determines the request type based on the URL path
fn get_request_type(url: &Url) -> RequestType {
let path = url.path();
@@ -53,49 +63,22 @@ impl Middleware for RateLimitMiddleware {
) -> std::result::Result<Response, reqwest_middleware::Error> {
let request_type = Self::get_request_type(req.url());
trace!(
url = %req.url(),
request_type = ?request_type,
"Rate limiting request"
);
// Wait for permission to make the request
let start = std::time::Instant::now();
self.rate_limiter.wait_for_permission(request_type).await;
let wait_duration = start.elapsed();
trace!(
url = %req.url(),
request_type = ?request_type,
"Rate limit permission granted, making request"
);
// Only log if rate limiting caused significant delay (>= 500ms)
if wait_duration.as_millis() >= 500 {
let limit_desc = Self::get_rate_limit_description(request_type);
debug!(
request_type = ?request_type,
wait_ms = wait_duration.as_millis(),
rate_limit = limit_desc,
"Rate limit caused delay"
);
}
// Make the actual request
let response_result = next.run(req, extensions).await;
match response_result {
Ok(response) => {
if response.status().is_success() {
trace!(
url = %response.url(),
status = response.status().as_u16(),
"Request completed successfully"
);
} else {
warn!(
url = %response.url(),
status = response.status().as_u16(),
"Request completed with error status"
);
}
Ok(response)
}
Err(error) => {
warn!(
url = ?error.url(),
error = ?error,
"Request failed"
);
Err(error)
}
}
next.run(req, extensions).await
}
}

View File

@@ -8,7 +8,6 @@ use governor::{
use std::num::NonZeroU32;
use std::sync::Arc;
use std::time::Duration;
use tracing::{debug, trace, warn};
/// Different types of Banner API requests with different rate limits
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
@@ -61,7 +60,6 @@ pub struct BannerRateLimiter {
search_limiter: RateLimiter<NotKeyed, InMemoryState, DefaultClock>,
metadata_limiter: RateLimiter<NotKeyed, InMemoryState, DefaultClock>,
reset_limiter: RateLimiter<NotKeyed, InMemoryState, DefaultClock>,
config: RateLimitConfig,
}
impl BannerRateLimiter {
@@ -88,7 +86,6 @@ impl BannerRateLimiter {
search_limiter: RateLimiter::direct(search_quota),
metadata_limiter: RateLimiter::direct(metadata_quota),
reset_limiter: RateLimiter::direct(reset_quota),
config,
}
}
@@ -101,38 +98,8 @@ impl BannerRateLimiter {
RequestType::Reset => &self.reset_limiter,
};
trace!(request_type = ?request_type, "Waiting for rate limit permission");
// Wait until we can make the request
// Wait until we can make the request (logging handled by middleware)
limiter.until_ready().await;
trace!(request_type = ?request_type, "Rate limit permission granted");
}
/// Checks if a request of the given type would be allowed immediately
pub fn check_permission(&self, request_type: RequestType) -> bool {
let limiter = match request_type {
RequestType::Session => &self.session_limiter,
RequestType::Search => &self.search_limiter,
RequestType::Metadata => &self.metadata_limiter,
RequestType::Reset => &self.reset_limiter,
};
limiter.check().is_ok()
}
/// Gets the current configuration
pub fn config(&self) -> &RateLimitConfig {
&self.config
}
/// Updates the rate limit configuration
pub fn update_config(&mut self, config: RateLimitConfig) {
self.config = config;
// Note: In a production system, you'd want to recreate the limiters
// with the new configuration, but for simplicity we'll just update
// the config field here.
warn!("Rate limit configuration updated - restart required for full effect");
}
}
@@ -145,14 +112,9 @@ impl Default for BannerRateLimiter {
/// A shared rate limiter instance
pub type SharedRateLimiter = Arc<BannerRateLimiter>;
/// Creates a new shared rate limiter with default configuration
pub fn create_shared_rate_limiter() -> SharedRateLimiter {
Arc::new(BannerRateLimiter::default())
}
/// Creates a new shared rate limiter with custom configuration
pub fn create_shared_rate_limiter_with_config(config: RateLimitConfig) -> SharedRateLimiter {
Arc::new(BannerRateLimiter::new(config))
pub fn create_shared_rate_limiter(config: Option<RateLimitConfig>) -> SharedRateLimiter {
Arc::new(BannerRateLimiter::new(config.unwrap_or_default()))
}
/// Conversion from config module's RateLimitingConfig to this module's RateLimitConfig

View File

@@ -82,7 +82,6 @@ impl BannerSession {
/// Updates the last activity timestamp
pub fn touch(&mut self) {
trace!(id = self.unique_session_id, "Session was used");
self.last_activity = Some(Instant::now());
}
@@ -162,7 +161,7 @@ impl TermPool {
async fn release(&self, session: BannerSession) {
let id = session.unique_session_id.clone();
if session.is_expired() {
trace!(id = id, "Session is now expired, dropping.");
debug!(id = id, "Session expired, dropping");
// Wake up a waiter, as it might need to create a new session
// if this was the last one.
self.notifier.notify_one();
@@ -171,10 +170,8 @@ impl TermPool {
let mut queue = self.sessions.lock().await;
queue.push_back(session);
let queue_size = queue.len();
drop(queue); // Release lock before notifying
trace!(id = id, queue_size, "Session returned to pool");
self.notifier.notify_one();
}
}
@@ -204,22 +201,21 @@ impl SessionPool {
.or_insert_with(|| Arc::new(TermPool::new()))
.clone();
let start = Instant::now();
let mut waited_for_creation = false;
loop {
// Fast path: Try to get an existing, non-expired session.
{
let mut queue = term_pool.sessions.lock().await;
if let Some(session) = queue.pop_front() {
if !session.is_expired() {
trace!(id = session.unique_session_id, "Reusing session from pool");
return Ok(PooledSession {
session: Some(session),
pool: Arc::clone(&term_pool),
});
} else {
trace!(
id = session.unique_session_id,
"Popped an expired session, discarding."
);
debug!(id = session.unique_session_id, "Discarded expired session");
}
}
} // MutexGuard is dropped, lock is released.
@@ -229,7 +225,10 @@ impl SessionPool {
if *is_creating_guard {
// Another task is already creating a session. Release the lock and wait.
drop(is_creating_guard);
trace!("Another task is creating a session, waiting for notification...");
if !waited_for_creation {
trace!("Waiting for another task to create session");
waited_for_creation = true;
}
term_pool.notifier.notified().await;
// Loop back to the top to try the fast path again.
continue;
@@ -240,12 +239,11 @@ impl SessionPool {
drop(is_creating_guard);
// Race: wait for a session to be returned OR for the rate limiter to allow a new one.
trace!("Pool empty, racing notifier vs rate limiter...");
trace!("Pool empty, creating new session");
tokio::select! {
_ = term_pool.notifier.notified() => {
// A session was returned while we were waiting!
// We are no longer the creator. Reset the flag and loop to race for the new session.
trace!("Notified that a session was returned. Looping to retry.");
let mut guard = term_pool.is_creating.lock().await;
*guard = false;
drop(guard);
@@ -253,7 +251,6 @@ impl SessionPool {
}
_ = SESSION_CREATION_RATE_LIMITER.until_ready() => {
// The rate limit has elapsed. It's our job to create the session.
trace!("Rate limiter ready. Proceeding to create a new session.");
let new_session_result = self.create_session(&term).await;
// After creation, we are no longer the creator. Reset the flag
@@ -265,7 +262,12 @@ impl SessionPool {
match new_session_result {
Ok(new_session) => {
debug!(id = new_session.unique_session_id, "Successfully created new session");
let elapsed = start.elapsed();
debug!(
id = new_session.unique_session_id,
elapsed_ms = elapsed.as_millis(),
"Created new session"
);
return Ok(PooledSession {
session: Some(new_session),
pool: term_pool,
@@ -298,8 +300,12 @@ impl SessionPool {
.get_all("Set-Cookie")
.iter()
.filter_map(|header_value| {
if let Ok(cookie) = Cookie::parse(header_value.to_str().unwrap()) {
Some((cookie.name().to_string(), cookie.value().to_string()))
if let Ok(cookie_str) = header_value.to_str() {
if let Ok(cookie) = Cookie::parse(cookie_str) {
Some((cookie.name().to_string(), cookie.value().to_string()))
} else {
None
}
} else {
None
}
@@ -310,16 +316,14 @@ impl SessionPool {
return Err(anyhow::anyhow!("Failed to get cookies"));
}
let jsessionid = cookies.get("JSESSIONID").unwrap();
let ssb_cookie = cookies.get("SSB_COOKIE").unwrap();
let jsessionid = cookies
.get("JSESSIONID")
.ok_or_else(|| anyhow::anyhow!("JSESSIONID cookie missing after validation"))?;
let ssb_cookie = cookies
.get("SSB_COOKIE")
.ok_or_else(|| anyhow::anyhow!("SSB_COOKIE cookie missing after validation"))?;
let cookie_header = format!("JSESSIONID={}; SSB_COOKIE={}", jsessionid, ssb_cookie);
trace!(
jsessionid = jsessionid,
ssb_cookie = ssb_cookie,
"New session cookies acquired"
);
self.http
.get(format!("{}/selfServiceMenu/data", self.base_url))
.header("Cookie", &cookie_header)
@@ -435,8 +439,23 @@ impl SessionPool {
let redirect: RedirectResponse = response.json().await?;
let base_url_path = self.base_url.parse::<Url>().unwrap().path().to_string();
let non_overlap_redirect = redirect.fwd_url.strip_prefix(&base_url_path).unwrap();
let base_url_path = self
.base_url
.parse::<Url>()
.context("Failed to parse base URL")?
.path()
.to_string();
let non_overlap_redirect =
redirect
.fwd_url
.strip_prefix(&base_url_path)
.ok_or_else(|| {
anyhow::anyhow!(
"Redirect URL '{}' does not start with expected prefix '{}'",
redirect.fwd_url,
base_url_path
)
})?;
// Follow the redirect
let redirect_url = format!("{}{}", self.base_url, non_overlap_redirect);
@@ -454,7 +473,6 @@ impl SessionPool {
));
}
trace!(term = term, "successfully selected term");
Ok(())
}
}

src/cli.rs (new file, +104 lines)
View File

@@ -0,0 +1,104 @@
use clap::Parser;
/// Banner Discord Bot - Course availability monitoring
///
/// This application runs multiple services that can be controlled via CLI arguments:
/// - bot: Discord bot for course monitoring commands
/// - web: HTTP server for web interface and API
/// - scraper: Background service for scraping course data
///
/// Use --services to specify which services to run, or --disable-services to exclude specific services.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
pub struct Args {
/// Log formatter to use
#[arg(long, value_enum, default_value_t = default_tracing_format())]
pub tracing: TracingFormat,
/// Services to run (comma-separated). Default: all services
///
/// Examples:
/// --services bot,web # Run only bot and web services
/// --services scraper # Run only the scraper service
#[arg(long, value_delimiter = ',', conflicts_with = "disable_services")]
pub services: Option<Vec<ServiceName>>,
/// Services to disable (comma-separated)
///
/// Examples:
/// --disable-services bot # Run web and scraper only
/// --disable-services bot,web # Run only the scraper service
#[arg(long, value_delimiter = ',', conflicts_with = "services")]
pub disable_services: Option<Vec<ServiceName>>,
}
#[derive(clap::ValueEnum, Clone, Debug)]
pub enum TracingFormat {
/// Use pretty formatter (default in debug mode)
Pretty,
/// Use JSON formatter (default in release mode)
Json,
}
#[derive(clap::ValueEnum, Clone, Debug, PartialEq)]
pub enum ServiceName {
/// Discord bot for course monitoring commands
Bot,
/// HTTP server for web interface and API
Web,
/// Background service for scraping course data
Scraper,
}
impl ServiceName {
/// Get all available services
pub fn all() -> Vec<ServiceName> {
vec![ServiceName::Bot, ServiceName::Web, ServiceName::Scraper]
}
/// Convert to string for service registration
pub fn as_str(&self) -> &'static str {
match self {
ServiceName::Bot => "bot",
ServiceName::Web => "web",
ServiceName::Scraper => "scraper",
}
}
}
/// Determine which services should be enabled based on CLI arguments
pub fn determine_enabled_services(args: &Args) -> Result<Vec<ServiceName>, anyhow::Error> {
match (&args.services, &args.disable_services) {
(Some(services), None) => {
// User specified which services to run
Ok(services.clone())
}
(None, Some(disabled)) => {
// User specified which services to disable
let enabled: Vec<ServiceName> = ServiceName::all()
.into_iter()
.filter(|s| !disabled.contains(s))
.collect();
Ok(enabled)
}
(None, None) => {
// Default: run all services
Ok(ServiceName::all())
}
(Some(_), Some(_)) => {
// This should be prevented by clap's conflicts_with, but just in case
Err(anyhow::anyhow!(
"Cannot specify both --services and --disable-services"
))
}
}
}
#[cfg(debug_assertions)]
const DEFAULT_TRACING_FORMAT: TracingFormat = TracingFormat::Pretty;
#[cfg(not(debug_assertions))]
const DEFAULT_TRACING_FORMAT: TracingFormat = TracingFormat::Json;
fn default_tracing_format() -> TracingFormat {
DEFAULT_TRACING_FORMAT
}

src/data/batch.rs (new file, +135 lines)
View File

@@ -0,0 +1,135 @@
//! Batch database operations for improved performance.
use crate::banner::Course;
use crate::error::Result;
use sqlx::PgPool;
use std::time::Instant;
use tracing::info;
/// Batch upsert courses in a single database query.
///
/// This function performs a bulk INSERT...ON CONFLICT DO UPDATE for all courses
/// in a single round-trip to the database, significantly reducing overhead compared
/// to individual inserts.
///
/// # Performance
/// - Reduces N database round-trips to 1
/// - Typical usage: 50-200 courses per batch
/// - PostgreSQL parameter limit: 65,535 (UNNEST binds ~10 array parameters in total, versus ~10 per course for a naive multi-row INSERT)
///
/// # Arguments
/// * `courses` - Slice of Course structs from the Banner API
/// * `db_pool` - PostgreSQL connection pool
///
/// # Returns
/// * `Ok(())` on success
/// * `Err(_)` if the database operation fails
///
/// # Example
/// ```no_run
/// use banner::data::batch::batch_upsert_courses;
/// use banner::banner::Course;
/// use sqlx::PgPool;
///
/// async fn example(courses: &[Course], pool: &PgPool) -> anyhow::Result<()> {
/// batch_upsert_courses(courses, pool).await?;
/// Ok(())
/// }
/// ```
pub async fn batch_upsert_courses(courses: &[Course], db_pool: &PgPool) -> Result<()> {
// Early return for empty batches
if courses.is_empty() {
info!("No courses to upsert, skipping batch operation");
return Ok(());
}
let start = Instant::now();
let course_count = courses.len();
// Extract course fields into vectors for UNNEST
let crns: Vec<&str> = courses
.iter()
.map(|c| c.course_reference_number.as_str())
.collect();
let subjects: Vec<&str> = courses.iter().map(|c| c.subject.as_str()).collect();
let course_numbers: Vec<&str> = courses.iter().map(|c| c.course_number.as_str()).collect();
let titles: Vec<&str> = courses.iter().map(|c| c.course_title.as_str()).collect();
let term_codes: Vec<&str> = courses.iter().map(|c| c.term.as_str()).collect();
let enrollments: Vec<i32> = courses.iter().map(|c| c.enrollment).collect();
let max_enrollments: Vec<i32> = courses.iter().map(|c| c.maximum_enrollment).collect();
let wait_counts: Vec<i32> = courses.iter().map(|c| c.wait_count).collect();
let wait_capacities: Vec<i32> = courses.iter().map(|c| c.wait_capacity).collect();
// Perform batch upsert using UNNEST for efficient bulk insertion
let result = sqlx::query(
r#"
INSERT INTO courses (
crn, subject, course_number, title, term_code,
enrollment, max_enrollment, wait_count, wait_capacity, last_scraped_at
)
SELECT * FROM UNNEST(
$1::text[], $2::text[], $3::text[], $4::text[], $5::text[],
$6::int4[], $7::int4[], $8::int4[], $9::int4[],
array_fill(NOW()::timestamptz, ARRAY[$10])
) AS t(
crn, subject, course_number, title, term_code,
enrollment, max_enrollment, wait_count, wait_capacity, last_scraped_at
)
ON CONFLICT (crn, term_code)
DO UPDATE SET
subject = EXCLUDED.subject,
course_number = EXCLUDED.course_number,
title = EXCLUDED.title,
enrollment = EXCLUDED.enrollment,
max_enrollment = EXCLUDED.max_enrollment,
wait_count = EXCLUDED.wait_count,
wait_capacity = EXCLUDED.wait_capacity,
last_scraped_at = EXCLUDED.last_scraped_at
"#,
)
.bind(&crns)
.bind(&subjects)
.bind(&course_numbers)
.bind(&titles)
.bind(&term_codes)
.bind(&enrollments)
.bind(&max_enrollments)
.bind(&wait_counts)
.bind(&wait_capacities)
.bind(course_count as i32)
.execute(db_pool)
.await
.map_err(|e| anyhow::anyhow!("Failed to batch upsert courses: {}", e))?;
let duration = start.elapsed();
info!(
courses_count = course_count,
rows_affected = result.rows_affected(),
duration_ms = duration.as_millis(),
"Batch upserted courses"
);
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_empty_batch_returns_ok() {
// Smoke test only: it confirms the module compiles and an empty batch is
// representable. Exercising the query itself requires the sqlx::test macro
// and a live test database (see the sketch below).
let courses: Vec<Course> = vec![];
assert_eq!(courses.len(), 0);
}
}
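
A database-backed version of this test might look like the following sketch, assuming a test database is wired up for #[sqlx::test] and migrations live under ./migrations (both assumptions):

#[sqlx::test(migrations = "./migrations")]
async fn empty_batch_is_a_no_op(pool: PgPool) {
    // An empty slice must succeed without touching the table.
    batch_upsert_courses(&[], &pool)
        .await
        .expect("empty batch should succeed");
    let count: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM courses")
        .fetch_one(&pool)
        .await
        .expect("count query failed");
    assert_eq!(count, 0);
}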

View File

@@ -1,3 +1,4 @@
//! Database models and schema.
pub mod batch;
pub mod models;

View File

@@ -3,6 +3,7 @@
use chrono::{DateTime, Utc};
use serde_json::Value;
#[allow(dead_code)]
#[derive(sqlx::FromRow, Debug, Clone)]
pub struct Course {
pub id: i32,
@@ -18,6 +19,7 @@ pub struct Course {
pub last_scraped_at: DateTime<Utc>,
}
#[allow(dead_code)]
#[derive(sqlx::FromRow, Debug, Clone)]
pub struct CourseMetric {
pub id: i32,
@@ -28,6 +30,7 @@ pub struct CourseMetric {
pub seats_available: i32,
}
#[allow(dead_code)]
#[derive(sqlx::FromRow, Debug, Clone)]
pub struct CourseAudit {
pub id: i32,
@@ -59,6 +62,7 @@ pub enum TargetType {
}
/// Represents a queryable job from the database.
#[allow(dead_code)]
#[derive(sqlx::FromRow, Debug, Clone)]
pub struct ScrapeJob {
pub id: i32,
@@ -68,4 +72,8 @@ pub struct ScrapeJob {
pub execute_at: DateTime<Utc>,
pub created_at: DateTime<Utc>,
pub locked_at: Option<DateTime<Utc>>,
/// Number of retry attempts for this job (non-negative, enforced by CHECK constraint)
pub retry_count: i32,
/// Maximum number of retry attempts allowed (non-negative, enforced by CHECK constraint)
pub max_retries: i32,
}

View File

@@ -10,6 +10,7 @@ use tracing::{Event, Level, Subscriber};
use tracing_subscriber::fmt::format::Writer;
use tracing_subscriber::fmt::{FmtContext, FormatEvent, FormatFields, FormattedFields};
use tracing_subscriber::registry::LookupSpan;
use yansi::Paint;
/// Cached format description for timestamps
/// Uses 3 subsecond digits on Emscripten, 5 otherwise for better performance
@@ -26,11 +27,6 @@ const TIMESTAMP_FORMAT: &[FormatItem<'static>] =
/// Re-implementation of the Full formatter with improved timestamp display.
pub struct CustomPrettyFormatter;
/// A custom JSON formatter that flattens fields to root level
///
/// Outputs logs in the format: { "message": "...", "level": "...", "customAttribute": "..." }
pub struct CustomJsonFormatter;
impl<S, N> FormatEvent<S, N> for CustomPrettyFormatter
where
S: Subscriber + for<'a> LookupSpan<'a>,
@@ -63,20 +59,20 @@ where
for span in scope.from_root() {
write_bold(&mut writer, span.metadata().name())?;
saw_any = true;
write_dimmed(&mut writer, ":")?;
let ext = span.extensions();
if let Some(fields) = &ext.get::<FormattedFields<N>>() {
if !fields.is_empty() {
write_bold(&mut writer, "{")?;
write!(writer, "{}", fields)?;
write_bold(&mut writer, "}")?;
}
}
if writer.has_ansi_escapes() {
write!(writer, "\x1b[2m:\x1b[0m")?;
} else {
writer.write_char(':')?;
if let Some(fields) = &ext.get::<FormattedFields<N>>()
&& !fields.fields.is_empty()
{
write_bold(&mut writer, "{")?;
writer.write_str(fields.fields.as_str())?;
write_bold(&mut writer, "}")?;
}
write_dimmed(&mut writer, ":")?;
}
if saw_any {
writer.write_char(' ')?;
}
@@ -84,7 +80,7 @@ where
// 4) Target (dimmed), then a space
if writer.has_ansi_escapes() {
write!(writer, "\x1b[2m{}\x1b[0m\x1b[2m:\x1b[0m ", meta.target())?;
write!(writer, "{}: ", Paint::new(meta.target()).dim())?;
} else {
write!(writer, "{}: ", meta.target())?;
}
@@ -97,6 +93,11 @@ where
}
}
/// A custom JSON formatter that flattens fields to root level
///
/// Outputs logs in the format: { "message": "...", "level": "...", "customAttribute": "..." }
pub struct CustomJsonFormatter;
impl<S, N> FormatEvent<S, N> for CustomJsonFormatter
where
S: Subscriber + for<'a> LookupSpan<'a>,
@@ -104,7 +105,7 @@ where
{
fn format_event(
&self,
_ctx: &FmtContext<'_, S, N>,
ctx: &FmtContext<'_, S, N>,
mut writer: Writer<'_>,
event: &Event<'_>,
) -> fmt::Result {
@@ -116,12 +117,15 @@ where
level: String,
target: String,
#[serde(flatten)]
spans: Map<String, Value>,
#[serde(flatten)]
fields: Map<String, Value>,
}
let (message, fields) = {
let (message, fields, spans) = {
let mut message: Option<String> = None;
let mut fields: Map<String, Value> = Map::new();
let mut spans: Map<String, Value> = Map::new();
struct FieldVisitor<'a> {
message: &'a mut Option<String>,
@@ -184,13 +188,42 @@ where
};
event.record(&mut visitor);
(message, fields)
// Collect span information from the span hierarchy
if let Some(scope) = ctx.event_scope() {
for span in scope.from_root() {
let span_name = span.metadata().name().to_string();
let mut span_fields: Map<String, Value> = Map::new();
// Try to extract fields from FormattedFields
let ext = span.extensions();
if let Some(formatted_fields) = ext.get::<FormattedFields<N>>() {
// Try to parse as JSON first
if let Ok(json_fields) = serde_json::from_str::<Map<String, Value>>(
formatted_fields.fields.as_str(),
) {
span_fields.extend(json_fields);
} else {
// If not valid JSON, treat the entire field string as a single field
span_fields.insert(
"raw".to_string(),
Value::String(formatted_fields.fields.as_str().to_string()),
);
}
}
// Insert span as a nested object directly into the spans map
spans.insert(span_name, Value::Object(span_fields));
}
}
(message, fields, spans)
};
let json = EventFields {
message: message.unwrap_or_default(),
level: meta.level().to_string(),
target: meta.target().to_string(),
spans,
fields,
};
@@ -205,15 +238,14 @@ where
/// Write the verbosity level with the same coloring/alignment as the Full formatter.
fn write_colored_level(writer: &mut Writer<'_>, level: &Level) -> fmt::Result {
if writer.has_ansi_escapes() {
// Basic ANSI color sequences; reset with \x1b[0m
let (color, text) = match *level {
Level::TRACE => ("\x1b[35m", "TRACE"), // purple
Level::DEBUG => ("\x1b[34m", "DEBUG"), // blue
Level::INFO => ("\x1b[32m", " INFO"), // green, note leading space
Level::WARN => ("\x1b[33m", " WARN"), // yellow, note leading space
Level::ERROR => ("\x1b[31m", "ERROR"), // red
let paint = match *level {
Level::TRACE => Paint::new("TRACE").magenta(),
Level::DEBUG => Paint::new("DEBUG").blue(),
Level::INFO => Paint::new(" INFO").green(),
Level::WARN => Paint::new(" WARN").yellow(),
Level::ERROR => Paint::new("ERROR").red(),
};
write!(writer, "{}{}\x1b[0m", color, text)
write!(writer, "{}", paint)
} else {
// Right-pad to width 5 like Full's non-ANSI mode
match *level {
@@ -228,7 +260,7 @@ fn write_colored_level(writer: &mut Writer<'_>, level: &Level) -> fmt::Result {
fn write_dimmed(writer: &mut Writer<'_>, s: impl fmt::Display) -> fmt::Result {
if writer.has_ansi_escapes() {
write!(writer, "\x1b[2m{}\x1b[0m", s)
write!(writer, "{}", Paint::new(s).dim())
} else {
write!(writer, "{}", s)
}
@@ -236,7 +268,7 @@ fn write_dimmed(writer: &mut Writer<'_>, s: impl fmt::Display) -> fmt::Result {
fn write_bold(writer: &mut Writer<'_>, s: impl fmt::Display) -> fmt::Result {
if writer.has_ansi_escapes() {
write!(writer, "\x1b[1m{}\x1b[0m", s)
write!(writer, "{}", Paint::new(s).bold())
} else {
write!(writer, "{}", s)
}
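
The swap from hand-written escape sequences to yansi shown above centralizes styling behind one API; a standalone illustration of the same calls:

use yansi::Paint;

fn main() {
    // Equivalent to the manual "\x1b[31m...\x1b[0m" and "\x1b[2m...\x1b[0m" pairs.
    println!("{}", Paint::new("ERROR").red());
    println!("{}", Paint::new("module::target").dim());
    println!("{}", Paint::new("{").bold());
}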

View File

@@ -1,9 +1,15 @@
pub mod app;
pub mod banner;
pub mod bot;
pub mod cli;
pub mod config;
pub mod data;
pub mod error;
pub mod formatter;
pub mod logging;
pub mod scraper;
pub mod services;
pub mod signals;
pub mod state;
pub mod utils;
pub mod web;

src/logging.rs (new file, 47 lines)

@@ -0,0 +1,47 @@
use crate::cli::TracingFormat;
use crate::config::Config;
use crate::formatter;
use tracing_subscriber::fmt::format::JsonFields;
use tracing_subscriber::{EnvFilter, FmtSubscriber};
/// Configure and initialize logging for the application
pub fn setup_logging(config: &Config, tracing_format: TracingFormat) {
// Configure logging based on config
// Note: Even when base_level is trace or debug, we suppress trace logs from noisy
// infrastructure modules to keep output readable. These modules use debug for important
// events and trace only for very detailed debugging.
let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| {
let base_level = &config.log_level;
EnvFilter::new(format!(
"warn,banner={},banner::rate_limiter=warn,banner::session=debug,banner::rate_limit_middleware=warn,banner::middleware=debug",
base_level
))
});
// Select formatter based on CLI args
let use_pretty = match tracing_format {
TracingFormat::Pretty => true,
TracingFormat::Json => false,
};
let subscriber: Box<dyn tracing::Subscriber + Send + Sync> = if use_pretty {
Box::new(
FmtSubscriber::builder()
.with_target(true)
.event_format(formatter::CustomPrettyFormatter)
.with_env_filter(filter)
.finish(),
)
} else {
Box::new(
FmtSubscriber::builder()
.with_target(true)
.event_format(formatter::CustomJsonFormatter)
.fmt_fields(JsonFields::new())
.with_env_filter(filter)
.finish(),
)
};
tracing::subscriber::set_global_default(subscriber).expect("setting default subscriber failed");
}
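
Since EnvFilter::try_from_default_env() is consulted first, a RUST_LOG value, when set, replaces the built-in filter string entirely; a hedged usage sketch (the function name here is illustrative):

fn init_logging(config: &Config) {
    // RUST_LOG="warn,banner::scraper=trace" overrides the filter built from
    // config.log_level; Pretty aids local debugging, Json is the release default.
    setup_logging(config, TracingFormat::Pretty);
    tracing::info!("logging initialized");
}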

View File

@@ -1,147 +1,27 @@
use crate::app::App;
use crate::cli::{Args, ServiceName, determine_enabled_services};
use crate::logging::setup_logging;
use clap::Parser;
use figment::value::UncasedStr;
use num_format::{Locale, ToFormattedString};
use serenity::all::{ActivityData, ClientBuilder, Context, GatewayIntents};
use tokio::signal;
use tracing::{error, info, warn};
use tracing_subscriber::{EnvFilter, FmtSubscriber};
use crate::banner::BannerApi;
use crate::bot::{Data, get_commands};
use crate::config::Config;
use crate::scraper::ScraperService;
use crate::services::manager::ServiceManager;
use crate::services::{ServiceResult, bot::BotService, web::WebService};
use crate::state::AppState;
use crate::web::routes::BannerState;
use figment::{Figment, providers::Env};
use sqlx::postgres::PgPoolOptions;
use std::sync::Arc;
use std::process::ExitCode;
use tracing::info;
mod app;
mod banner;
mod bot;
mod cli;
mod config;
mod data;
mod error;
mod formatter;
mod logging;
mod scraper;
mod services;
mod signals;
mod state;
mod web;
#[cfg(debug_assertions)]
const DEFAULT_TRACING_FORMAT: TracingFormat = TracingFormat::Pretty;
#[cfg(not(debug_assertions))]
const DEFAULT_TRACING_FORMAT: TracingFormat = TracingFormat::Json;
/// Banner Discord Bot - Course availability monitoring
///
/// This application runs multiple services that can be controlled via CLI arguments:
/// - bot: Discord bot for course monitoring commands
/// - web: HTTP server for web interface and API
/// - scraper: Background service for scraping course data
///
/// Use --services to specify which services to run, or --disable-services to exclude specific services.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
/// Log formatter to use
#[arg(long, value_enum, default_value_t = DEFAULT_TRACING_FORMAT)]
tracing: TracingFormat,
/// Services to run (comma-separated). Default: all services
///
/// Examples:
/// --services bot,web # Run only bot and web services
/// --services scraper # Run only the scraper service
#[arg(long, value_delimiter = ',', conflicts_with = "disable_services")]
services: Option<Vec<ServiceName>>,
/// Services to disable (comma-separated)
///
/// Examples:
/// --disable-services bot # Run web and scraper only
/// --disable-services bot,web # Run only the scraper service
#[arg(long, value_delimiter = ',', conflicts_with = "services")]
disable_services: Option<Vec<ServiceName>>,
}
#[derive(clap::ValueEnum, Clone, Debug)]
enum TracingFormat {
/// Use pretty formatter (default in debug mode)
Pretty,
/// Use JSON formatter (default in release mode)
Json,
}
#[derive(clap::ValueEnum, Clone, Debug, PartialEq)]
enum ServiceName {
/// Discord bot for course monitoring commands
Bot,
/// HTTP server for web interface and API
Web,
/// Background service for scraping course data
Scraper,
}
impl ServiceName {
/// Get all available services
fn all() -> Vec<ServiceName> {
vec![ServiceName::Bot, ServiceName::Web, ServiceName::Scraper]
}
/// Convert to string for service registration
fn as_str(&self) -> &'static str {
match self {
ServiceName::Bot => "bot",
ServiceName::Web => "web",
ServiceName::Scraper => "scraper",
}
}
}
/// Determine which services should be enabled based on CLI arguments
fn determine_enabled_services(args: &Args) -> Result<Vec<ServiceName>, anyhow::Error> {
match (&args.services, &args.disable_services) {
(Some(services), None) => {
// User specified which services to run
Ok(services.clone())
}
(None, Some(disabled)) => {
// User specified which services to disable
let enabled: Vec<ServiceName> = ServiceName::all()
.into_iter()
.filter(|s| !disabled.contains(s))
.collect();
Ok(enabled)
}
(None, None) => {
// Default: run all services
Ok(ServiceName::all())
}
(Some(_), Some(_)) => {
// This should be prevented by clap's conflicts_with, but just in case
Err(anyhow::anyhow!(
"Cannot specify both --services and --disable-services"
))
}
}
}
async fn update_bot_status(ctx: &Context, app_state: &AppState) -> Result<(), anyhow::Error> {
let course_count = app_state.get_course_count().await?;
ctx.set_activity(Some(ActivityData::playing(format!(
"Querying {:} classes",
course_count.to_formatted_string(&Locale::en)
))));
tracing::info!(course_count = course_count, "Updated bot status");
Ok(())
}
#[tokio::main]
async fn main() {
async fn main() -> ExitCode {
dotenvy::dotenv().ok();
// Parse CLI arguments
@@ -156,51 +36,11 @@ async fn main() {
"services configuration loaded"
);
// Load configuration first to get log level
let config: Config = Figment::new()
.merge(Env::raw().map(|k| {
if k == UncasedStr::new("RAILWAY_DEPLOYMENT_DRAINING_SECONDS") {
"SHUTDOWN_TIMEOUT".into()
} else {
k.into()
}
}))
.extract()
.expect("Failed to load config");
// Create and initialize the application
let mut app = App::new().await.expect("Failed to initialize application");
// Configure logging based on config
let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| {
let base_level = &config.log_level;
EnvFilter::new(format!(
"warn,banner={},banner::rate_limiter=warn,banner::session=warn,banner::rate_limit_middleware=warn",
base_level
))
});
// Select formatter based on CLI args
let use_pretty = match args.tracing {
TracingFormat::Pretty => true,
TracingFormat::Json => false,
};
let subscriber: Box<dyn tracing::Subscriber + Send + Sync> = if use_pretty {
Box::new(
FmtSubscriber::builder()
.with_target(true)
.event_format(formatter::CustomPrettyFormatter)
.with_env_filter(filter)
.finish(),
)
} else {
Box::new(
FmtSubscriber::builder()
.with_target(true)
.event_format(formatter::CustomJsonFormatter)
.with_env_filter(filter)
.finish(),
)
};
tracing::subscriber::set_global_default(subscriber).expect("setting default subscriber failed");
// Setup logging
setup_logging(app.config(), args.tracing);
// Log application startup context
info!(
@@ -213,274 +53,18 @@ async fn main() {
"starting banner"
);
// Create database connection pool
let db_pool = PgPoolOptions::new()
.max_connections(10)
.connect(&config.database_url)
.await
.expect("Failed to create database pool");
// Setup services (web, scraper)
app.setup_services(&enabled_services)
.expect("Failed to setup services");
info!(
port = config.port,
shutdown_timeout = format!("{:.2?}", config.shutdown_timeout),
banner_base_url = config.banner_base_url,
"configuration loaded"
);
// Create BannerApi and AppState
let banner_api = BannerApi::new_with_config(
config.banner_base_url.clone(),
config.rate_limiting.clone().into(),
)
.expect("Failed to create BannerApi");
let banner_api_arc = Arc::new(banner_api);
let app_state = AppState::new(banner_api_arc.clone(), db_pool.clone());
// Create BannerState for web service
let banner_state = BannerState {
api: banner_api_arc.clone(),
};
// Configure the client with your Discord bot token in the environment
let intents = GatewayIntents::non_privileged();
let bot_target_guild = config.bot_target_guild;
let framework = poise::Framework::builder()
.options(poise::FrameworkOptions {
commands: get_commands(),
pre_command: |ctx| {
Box::pin(async move {
let content = match ctx {
poise::Context::Application(_) => ctx.invocation_string(),
poise::Context::Prefix(prefix) => prefix.msg.content.to_string(),
};
let channel_name = ctx
.channel_id()
.name(ctx.http())
.await
.unwrap_or("unknown".to_string());
let span = tracing::Span::current();
span.record("command_name", ctx.command().qualified_name.as_str());
span.record("invocation", ctx.invocation_string());
span.record("msg.content", content.as_str());
span.record("msg.author", ctx.author().tag().as_str());
span.record("msg.id", ctx.id());
span.record("msg.channel_id", ctx.channel_id().get());
span.record("msg.channel", channel_name.as_str());
tracing::info!(
command_name = ctx.command().qualified_name.as_str(),
invocation = ctx.invocation_string(),
msg.content = %content,
msg.author = %ctx.author().tag(),
msg.author_id = %ctx.author().id,
msg.id = %ctx.id(),
msg.channel = %channel_name.as_str(),
msg.channel_id = %ctx.channel_id(),
"{} invoked by {}",
ctx.command().name,
ctx.author().tag()
);
})
},
on_error: |error| {
Box::pin(async move {
if let Err(e) = poise::builtins::on_error(error).await {
tracing::error!(error = %e, "Fatal error while sending error message");
}
// error!(error = ?error, "command error");
})
},
..Default::default()
})
.setup(move |ctx, _ready, framework| {
let app_state = app_state.clone();
Box::pin(async move {
poise::builtins::register_in_guild(
ctx,
&framework.options().commands,
bot_target_guild.into(),
)
.await?;
poise::builtins::register_globally(ctx, &framework.options().commands).await?;
// Start status update task
let status_app_state = app_state.clone();
let status_ctx = ctx.clone();
tokio::spawn(async move {
let mut interval = tokio::time::interval(std::time::Duration::from_secs(30));
// Update status immediately on startup
if let Err(e) = update_bot_status(&status_ctx, &status_app_state).await {
tracing::error!(error = %e, "Failed to update status on startup");
}
loop {
interval.tick().await;
if let Err(e) = update_bot_status(&status_ctx, &status_app_state).await {
tracing::error!(error = %e, "Failed to update bot status");
}
}
});
Ok(Data { app_state })
})
})
.build();
let client = ClientBuilder::new(config.bot_token, intents)
.framework(framework)
.await
.expect("Failed to build client");
// Extract shutdown timeout before moving config
let shutdown_timeout = config.shutdown_timeout;
let port = config.port;
// Create service manager
let mut service_manager = ServiceManager::new();
// Register enabled services with the manager
// Setup bot service if enabled
if enabled_services.contains(&ServiceName::Bot) {
let bot_service = Box::new(BotService::new(client));
service_manager.register_service(ServiceName::Bot.as_str(), bot_service);
}
if enabled_services.contains(&ServiceName::Web) {
let web_service = Box::new(WebService::new(port, banner_state));
service_manager.register_service(ServiceName::Web.as_str(), web_service);
}
if enabled_services.contains(&ServiceName::Scraper) {
let scraper_service =
Box::new(ScraperService::new(db_pool.clone(), banner_api_arc.clone()));
service_manager.register_service(ServiceName::Scraper.as_str(), scraper_service);
}
// Check if any services are enabled
if !service_manager.has_services() {
error!("No services enabled. Cannot start application.");
std::process::exit(1);
}
// Spawn all registered services
service_manager.spawn_all();
// Set up signal handling for both SIGINT (Ctrl+C) and SIGTERM
let ctrl_c = async {
signal::ctrl_c()
app.setup_bot_service()
.await
.expect("Failed to install CTRL+C signal handler");
info!("received ctrl+c, gracefully shutting down...");
};
#[cfg(unix)]
let sigterm = async {
use tokio::signal::unix::{SignalKind, signal};
let mut sigterm_stream =
signal(SignalKind::terminate()).expect("Failed to install SIGTERM signal handler");
sigterm_stream.recv().await;
info!("received SIGTERM, gracefully shutting down...");
};
#[cfg(not(unix))]
let sigterm = async {
// On non-Unix systems, create a future that never completes
// This ensures the select! macro works correctly
std::future::pending::<()>().await;
};
// Main application loop - wait for services or signals
let mut exit_code = 0;
tokio::select! {
(service_name, result) = service_manager.run() => {
// A service completed unexpectedly
match result {
ServiceResult::GracefulShutdown => {
info!(service = service_name, "service completed gracefully");
}
ServiceResult::NormalCompletion => {
warn!(service = service_name, "service completed unexpectedly");
exit_code = 1;
}
ServiceResult::Error(e) => {
error!(service = service_name, error = ?e, "service failed");
exit_code = 1;
}
}
// Shutdown remaining services
match service_manager.shutdown(shutdown_timeout).await {
Ok(elapsed) => {
info!(
remaining = format!("{:.2?}", shutdown_timeout - elapsed),
"graceful shutdown complete"
);
}
Err(pending_services) => {
warn!(
pending_count = pending_services.len(),
pending_services = ?pending_services,
"graceful shutdown elapsed - {} service(s) did not complete",
pending_services.len()
);
// Non-zero exit code, default to 2 if not set
exit_code = if exit_code == 0 { 2 } else { exit_code };
}
}
}
_ = ctrl_c => {
// User requested shutdown via Ctrl+C
info!("user requested shutdown via ctrl+c");
match service_manager.shutdown(shutdown_timeout).await {
Ok(elapsed) => {
info!(
remaining = format!("{:.2?}", shutdown_timeout - elapsed),
"graceful shutdown complete"
);
info!("graceful shutdown complete");
}
Err(pending_services) => {
warn!(
pending_count = pending_services.len(),
pending_services = ?pending_services,
"graceful shutdown elapsed - {} service(s) did not complete",
pending_services.len()
);
exit_code = 2;
}
}
}
_ = sigterm => {
// System requested shutdown via SIGTERM
info!("system requested shutdown via SIGTERM");
match service_manager.shutdown(shutdown_timeout).await {
Ok(elapsed) => {
info!(
remaining = format!("{:.2?}", shutdown_timeout - elapsed),
"graceful shutdown complete"
);
info!("graceful shutdown complete");
}
Err(pending_services) => {
warn!(
pending_count = pending_services.len(),
pending_services = ?pending_services,
"graceful shutdown elapsed - {} service(s) did not complete",
pending_services.len()
);
exit_code = 2;
}
}
}
.expect("Failed to setup bot service");
}
info!(exit_code, "application shutdown complete");
std::process::exit(exit_code);
// Start all services and run the application
app.start_services();
app.run().await
}
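
Reconstructed from the surviving lines, the slimmed-down main now reads roughly as follows (the exact expect messages are assumptions):

#[tokio::main]
async fn main() -> ExitCode {
    dotenvy::dotenv().ok();
    let args = Args::parse();
    let enabled_services = determine_enabled_services(&args).expect("invalid service selection");
    // App::new() now owns config loading and database pool creation.
    let mut app = App::new().await.expect("Failed to initialize application");
    setup_logging(app.config(), args.tracing);
    app.setup_services(&enabled_services).expect("Failed to setup services");
    app.setup_bot_service().await.expect("Failed to setup bot service");
    // Start all services and run the application
    app.start_services();
    app.run().await
}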

View File

@@ -12,7 +12,6 @@ use std::fmt;
pub enum JobParseError {
InvalidJson(serde_json::Error),
UnsupportedTargetType(TargetType),
MissingRequiredField(String),
}
impl fmt::Display for JobParseError {
@@ -22,9 +21,6 @@ impl fmt::Display for JobParseError {
JobParseError::UnsupportedTargetType(t) => {
write!(f, "Unsupported target type: {:?}", t)
}
JobParseError::MissingRequiredField(field) => {
write!(f, "Missing required field: {}", field)
}
}
}
}
@@ -67,6 +63,7 @@ impl std::error::Error for JobError {
#[async_trait::async_trait]
pub trait Job: Send + Sync {
/// The target type this job handles
#[allow(dead_code)]
fn target_type(&self) -> TargetType;
/// Process the job with the given API client and database pool
@@ -99,14 +96,9 @@ impl JobType {
}
/// Convert to a Job trait object
pub fn as_job(self) -> Box<dyn Job> {
pub fn boxed(self) -> Box<dyn Job> {
match self {
JobType::Subject(job) => Box::new(job),
}
}
}
/// Helper function to create a subject job
pub fn create_subject_job(subject: String) -> JobType {
JobType::Subject(subject::SubjectJob::new(subject))
}
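
With create_subject_job removed and as_job renamed, call sites presumably construct jobs along these lines (a sketch, not taken from the diff; SubjectJob::new and description() appear in the subject.rs hunks below, and "CSCE" is a hypothetical subject code):

fn example() {
    let job = JobType::Subject(subject::SubjectJob::new("CSCE".to_string()));
    let boxed: Box<dyn Job> = job.boxed();
    println!("{}", boxed.description());
}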

View File

@@ -1,10 +1,11 @@
use super::Job;
use crate::banner::{BannerApi, Course, SearchQuery, Term};
use crate::banner::{BannerApi, SearchQuery, Term};
use crate::data::batch::batch_upsert_courses;
use crate::data::models::TargetType;
use crate::error::Result;
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use tracing::{debug, info, trace};
use tracing::{debug, info};
/// Job implementation for scraping subject data
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -24,9 +25,9 @@ impl Job for SubjectJob {
TargetType::Subject
}
#[tracing::instrument(skip(self, banner_api, db_pool), fields(subject = %self.subject))]
async fn process(&self, banner_api: &BannerApi, db_pool: &PgPool) -> Result<()> {
let subject_code = &self.subject;
debug!(subject = subject_code, "Processing subject job");
// Get the current term
let term = Term::get_current().inner().to_string();
@@ -42,9 +43,7 @@ impl Job for SubjectJob {
count = courses_from_api.len(),
"Found courses"
);
for course in courses_from_api {
self.upsert_course(&course, db_pool).await?;
}
batch_upsert_courses(&courses_from_api, db_pool).await?;
}
debug!(subject = subject_code, "Subject job completed");
@@ -55,39 +54,3 @@ impl Job for SubjectJob {
format!("Scrape subject: {}", self.subject)
}
}
impl SubjectJob {
async fn upsert_course(&self, course: &Course, db_pool: &PgPool) -> Result<()> {
sqlx::query(
r#"
INSERT INTO courses (crn, subject, course_number, title, term_code, enrollment, max_enrollment, wait_count, wait_capacity, last_scraped_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
ON CONFLICT (crn, term_code) DO UPDATE SET
subject = EXCLUDED.subject,
course_number = EXCLUDED.course_number,
title = EXCLUDED.title,
enrollment = EXCLUDED.enrollment,
max_enrollment = EXCLUDED.max_enrollment,
wait_count = EXCLUDED.wait_count,
wait_capacity = EXCLUDED.wait_capacity,
last_scraped_at = EXCLUDED.last_scraped_at
"#,
)
.bind(&course.course_reference_number)
.bind(&course.subject)
.bind(&course.course_number)
.bind(&course.course_title)
.bind(&course.term)
.bind(course.enrollment)
.bind(course.maximum_enrollment)
.bind(course.wait_count)
.bind(course.wait_capacity)
.bind(chrono::Utc::now())
.execute(db_pool)
.await
.map(|result| {
trace!(subject = course.subject, crn = course.course_reference_number, result = ?result, "Course upserted");
})
.map_err(|e| anyhow::anyhow!("Failed to upsert course: {e}"))
}
}

View File

@@ -3,14 +3,15 @@ pub mod scheduler;
pub mod worker;
use crate::banner::BannerApi;
use crate::services::Service;
use sqlx::PgPool;
use std::sync::Arc;
use tokio::sync::broadcast;
use tokio::task::JoinHandle;
use tracing::info;
use tracing::{info, warn};
use self::scheduler::Scheduler;
use self::worker::Worker;
use crate::services::Service;
/// The main service that will be managed by the application's `ServiceManager`.
///
@@ -21,6 +22,7 @@ pub struct ScraperService {
banner_api: Arc<BannerApi>,
scheduler_handle: Option<JoinHandle<()>>,
worker_handles: Vec<JoinHandle<()>>,
shutdown_tx: Option<broadcast::Sender<()>>,
}
impl ScraperService {
@@ -31,6 +33,7 @@ impl ScraperService {
banner_api,
scheduler_handle: None,
worker_handles: Vec::new(),
shutdown_tx: None,
}
}
@@ -38,9 +41,14 @@ impl ScraperService {
pub fn start(&mut self) {
info!("ScraperService starting");
// Create shutdown channel
let (shutdown_tx, _) = broadcast::channel(1);
self.shutdown_tx = Some(shutdown_tx.clone());
let scheduler = Scheduler::new(self.db_pool.clone(), self.banner_api.clone());
let shutdown_rx = shutdown_tx.subscribe();
let scheduler_handle = tokio::spawn(async move {
scheduler.run().await;
scheduler.run(shutdown_rx).await;
});
self.scheduler_handle = Some(scheduler_handle);
info!("Scheduler task spawned");
@@ -48,8 +56,9 @@ impl ScraperService {
let worker_count = 4; // This could be configurable
for i in 0..worker_count {
let worker = Worker::new(i, self.db_pool.clone(), self.banner_api.clone());
let shutdown_rx = shutdown_tx.subscribe();
let worker_handle = tokio::spawn(async move {
worker.run().await;
worker.run(shutdown_rx).await;
});
self.worker_handles.push(worker_handle);
}
@@ -58,18 +67,6 @@ impl ScraperService {
"Spawned worker tasks"
);
}
/// Signals all child tasks to gracefully shut down.
pub async fn shutdown(&mut self) {
info!("Shutting down scraper service");
if let Some(handle) = self.scheduler_handle.take() {
handle.abort();
}
for handle in self.worker_handles.drain(..) {
handle.abort();
}
info!("Scraper service shutdown");
}
}
#[async_trait::async_trait]
@@ -85,7 +82,35 @@ impl Service for ScraperService {
}
async fn shutdown(&mut self) -> Result<(), anyhow::Error> {
self.shutdown().await;
info!("Shutting down scraper service");
// Send shutdown signal to all tasks
if let Some(shutdown_tx) = self.shutdown_tx.take() {
let _ = shutdown_tx.send(());
} else {
warn!("No shutdown channel found for scraper service");
return Err(anyhow::anyhow!("No shutdown channel available"));
}
// Collect all handles
let mut all_handles = Vec::new();
if let Some(handle) = self.scheduler_handle.take() {
all_handles.push(handle);
}
all_handles.append(&mut self.worker_handles);
// Wait for all tasks to complete (no internal timeout - let ServiceManager handle it)
let results = futures::future::join_all(all_handles).await;
let failed = results.iter().filter(|r| r.is_err()).count();
if failed > 0 {
warn!(
failed_count = failed,
"Some scraper tasks panicked during shutdown"
);
return Err(anyhow::anyhow!("{} task(s) panicked", failed));
}
info!("All scraper tasks shutdown gracefully");
Ok(())
}
}
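
The broadcast-plus-join_all shutdown above, reduced to a runnable sketch (tokio and the futures crate, both already dependencies here, are assumed):

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (shutdown_tx, _) = broadcast::channel::<()>(1);
    let mut handles = Vec::new();
    for i in 0..4 {
        let mut rx = shutdown_tx.subscribe();
        handles.push(tokio::spawn(async move {
            let _ = rx.recv().await; // park until the signal arrives
            println!("task {i} exiting");
        }));
    }
    let _ = shutdown_tx.send(()); // one send wakes every subscriber
    let results = futures::future::join_all(handles).await;
    assert_eq!(results.iter().filter(|r| r.is_err()).count(), 0);
}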

View File

@@ -6,8 +6,10 @@ use serde_json::json;
use sqlx::PgPool;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::broadcast;
use tokio::time;
use tracing::{debug, error, info, trace};
use tokio_util::sync::CancellationToken;
use tracing::{debug, error, info, warn};
/// Periodically analyzes data and enqueues prioritized scrape jobs.
pub struct Scheduler {
@@ -23,31 +25,92 @@ impl Scheduler {
}
}
/// Runs the scheduler's main loop.
pub async fn run(&self) {
/// Runs the scheduler's main loop with graceful shutdown support.
///
/// The scheduler wakes up every 60 seconds to analyze data and enqueue jobs.
/// When a shutdown signal is received:
/// 1. Any in-progress scheduling work is gracefully cancelled via CancellationToken
/// 2. The scheduler waits up to 5 seconds for work to complete
/// 3. If timeout occurs, the task is abandoned (it will be aborted when dropped)
///
/// This ensures that shutdown is responsive even if scheduling work is blocked.
pub async fn run(&self, mut shutdown_rx: broadcast::Receiver<()>) {
info!("Scheduler service started");
let mut interval = time::interval(Duration::from_secs(60)); // Runs every minute
let work_interval = Duration::from_secs(60);
let mut next_run = time::Instant::now();
let mut current_work: Option<(tokio::task::JoinHandle<()>, CancellationToken)> = None;
loop {
interval.tick().await;
// Scheduler analyzing data...
if let Err(e) = self.schedule_jobs().await {
error!(error = ?e, "Failed to schedule jobs");
tokio::select! {
_ = time::sleep_until(next_run) => {
let cancel_token = CancellationToken::new();
// Spawn work in separate task to allow graceful cancellation during shutdown.
// Without this, shutdown would have to wait for the full scheduling cycle.
let work_handle = tokio::spawn({
let db_pool = self.db_pool.clone();
let banner_api = self.banner_api.clone();
let cancel_token = cancel_token.clone();
async move {
tokio::select! {
result = Self::schedule_jobs_impl(&db_pool, &banner_api) => {
if let Err(e) = result {
error!(error = ?e, "Failed to schedule jobs");
}
}
_ = cancel_token.cancelled() => {
debug!("Scheduling work cancelled gracefully");
}
}
}
});
current_work = Some((work_handle, cancel_token));
next_run = time::Instant::now() + work_interval;
}
_ = shutdown_rx.recv() => {
info!("Scheduler received shutdown signal");
if let Some((handle, cancel_token)) = current_work.take() {
cancel_token.cancel();
// Wait briefly for graceful completion
if tokio::time::timeout(Duration::from_secs(5), handle).await.is_err() {
warn!("Scheduling work did not complete within 5s, abandoning");
} else {
debug!("Scheduling work completed gracefully");
}
}
info!("Scheduler exiting gracefully");
break;
}
}
}
}
/// The core logic for deciding what jobs to create.
async fn schedule_jobs(&self) -> Result<()> {
/// Core scheduling logic that analyzes data and creates scrape jobs.
///
/// Strategy:
/// 1. Fetch all subjects for the current term from Banner API
/// 2. Query existing jobs in a single batch query
/// 3. Create jobs only for subjects that don't have pending jobs
///
/// This is an associated function (no &self) so it can be called from spawned tasks.
#[tracing::instrument(skip_all, fields(term))]
async fn schedule_jobs_impl(db_pool: &PgPool, banner_api: &BannerApi) -> Result<()> {
// For now, we will implement a simple baseline scheduling strategy:
// 1. Get a list of all subjects from the Banner API.
// 2. Query existing jobs for all subjects in a single query.
// 3. Create new jobs only for subjects that don't have existing jobs.
let term = Term::get_current().inner().to_string();
tracing::Span::current().record("term", term.as_str());
debug!(term = term, "Enqueuing subject jobs");
let subjects = self.banner_api.get_subjects("", &term, 1, 500).await?;
let subjects = banner_api.get_subjects("", &term, 1, 500).await?;
debug!(
subject_count = subjects.len(),
"Retrieved subjects from API"
@@ -61,12 +124,12 @@ impl Scheduler {
// Query existing jobs for all subjects in a single query
let existing_jobs: Vec<(serde_json::Value,)> = sqlx::query_as(
"SELECT target_payload FROM scrape_jobs
"SELECT target_payload FROM scrape_jobs
WHERE target_type = $1 AND target_payload = ANY($2) AND locked_at IS NULL",
)
.bind(TargetType::Subject)
.bind(&subject_payloads)
.fetch_all(&self.db_pool)
.fetch_all(db_pool)
.await?;
// Convert to a HashSet for efficient lookup
@@ -76,6 +139,7 @@ impl Scheduler {
.collect();
// Filter out subjects that already have jobs and prepare new jobs
let mut skipped_count = 0;
let new_jobs: Vec<_> = subjects
.into_iter()
.filter_map(|subject| {
@@ -84,7 +148,7 @@ impl Scheduler {
let payload_str = payload.to_string();
if existing_payloads.contains(&payload_str) {
trace!(subject = subject.code, "Job already exists, skipping");
skipped_count += 1;
None
} else {
Some((payload, subject.code))
@@ -92,10 +156,14 @@ impl Scheduler {
})
.collect();
if skipped_count > 0 {
debug!(count = skipped_count, "Skipped subjects with existing jobs");
}
// Insert all new jobs in a single batch
if !new_jobs.is_empty() {
let now = chrono::Utc::now();
let mut tx = self.db_pool.begin().await?;
let mut tx = db_pool.begin().await?;
for (payload, subject_code) in new_jobs {
sqlx::query(
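
As an aside, the scheduler's cancellation pattern isolated into a runnable sketch, assuming the tokio and tokio-util crates already used above:

use std::time::Duration;
use tokio_util::sync::CancellationToken;

async fn cancellable_cycle(token: CancellationToken) {
    tokio::select! {
        _ = tokio::time::sleep(Duration::from_secs(60)) => {
            println!("scheduling cycle finished");
        }
        _ = token.cancelled() => {
            println!("scheduling cycle cancelled gracefully");
        }
    }
}

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let handle = tokio::spawn(cancellable_cycle(token.clone()));
    token.cancel(); // stands in for the shutdown broadcast
    let _ = handle.await;
}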

View File

@@ -5,8 +5,9 @@ use crate::scraper::jobs::{JobError, JobType};
use sqlx::PgPool;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::broadcast;
use tokio::time;
use tracing::{debug, error, info, trace, warn};
use tracing::{Instrument, debug, error, info, trace, warn};
/// A single worker instance.
///
@@ -28,79 +29,52 @@ impl Worker {
}
/// Runs the worker's main loop.
pub async fn run(&self) {
info!(worker_id = self.id, "Worker started.");
loop {
match self.fetch_and_lock_job().await {
Ok(Some(job)) => {
let job_id = job.id;
debug!(worker_id = self.id, job_id = job.id, "Processing job");
match self.process_job(job).await {
Ok(()) => {
debug!(worker_id = self.id, job_id, "Job completed");
// If successful, delete the job.
if let Err(delete_err) = self.delete_job(job_id).await {
error!(
worker_id = self.id,
job_id,
?delete_err,
"Failed to delete job"
);
}
}
Err(JobError::Recoverable(e)) => {
// Check if the error is due to an invalid session
if let Some(BannerApiError::InvalidSession(_)) =
e.downcast_ref::<BannerApiError>()
{
warn!(
worker_id = self.id,
job_id, "Invalid session detected. Forcing session refresh."
);
} else {
error!(worker_id = self.id, job_id, error = ?e, "Failed to process job");
}
pub async fn run(&self, mut shutdown_rx: broadcast::Receiver<()>) {
info!(worker_id = self.id, "Worker started");
// Unlock the job so it can be retried
if let Err(unlock_err) = self.unlock_job(job_id).await {
error!(
worker_id = self.id,
job_id,
?unlock_err,
"Failed to unlock job"
);
}
loop {
// Fetch and lock a job, racing against shutdown signal
let job = tokio::select! {
_ = shutdown_rx.recv() => {
info!(worker_id = self.id, "Worker received shutdown signal, exiting gracefully");
break;
}
result = self.fetch_and_lock_job() => {
match result {
Ok(Some(job)) => job,
Ok(None) => {
trace!(worker_id = self.id, "No jobs available, waiting");
time::sleep(Duration::from_secs(5)).await;
continue;
}
Err(JobError::Unrecoverable(e)) => {
error!(
worker_id = self.id,
job_id,
error = ?e,
"Job corrupted, deleting"
);
// Parse errors are unrecoverable - delete the job
if let Err(delete_err) = self.delete_job(job_id).await {
error!(
worker_id = self.id,
job_id,
?delete_err,
"Failed to delete corrupted job"
);
}
Err(e) => {
warn!(worker_id = self.id, error = ?e, "Failed to fetch job, waiting");
time::sleep(Duration::from_secs(10)).await;
continue;
}
}
}
Ok(None) => {
// No job found, wait for a bit before polling again.
trace!(worker_id = self.id, "No jobs available, waiting");
time::sleep(Duration::from_secs(5)).await;
};
let job_id = job.id;
let retry_count = job.retry_count;
let max_retries = job.max_retries;
let start = std::time::Instant::now();
// Process the job, racing against shutdown signal
let process_result = tokio::select! {
_ = shutdown_rx.recv() => {
self.handle_shutdown_during_processing(job_id).await;
break;
}
Err(e) => {
warn!(worker_id = self.id, error = ?e, "Failed to fetch job");
// Wait before retrying to avoid spamming errors.
time::sleep(Duration::from_secs(10)).await;
}
}
result = self.process_job(job) => result
};
let duration = start.elapsed();
// Handle the job processing result
self.handle_job_result(job_id, retry_count, max_retries, process_result, duration)
.await;
}
}
@@ -135,22 +109,33 @@ impl Worker {
.map_err(|e| JobError::Unrecoverable(anyhow::anyhow!(e)))?; // Parse errors are unrecoverable
// Get the job implementation
let job_impl = job_type.as_job();
let job_impl = job_type.boxed();
debug!(
worker_id = self.id,
// Create span with job context
let span = tracing::debug_span!(
"process_job",
job_id = job.id,
description = job_impl.description(),
"Processing job"
job_type = job_impl.description()
);
// Process the job - API errors are recoverable
job_impl
.process(&self.banner_api, &self.db_pool)
.await
.map_err(JobError::Recoverable)?;
async move {
debug!(
worker_id = self.id,
job_id = job.id,
description = job_impl.description(),
"Processing job"
);
Ok(())
// Process the job - API errors are recoverable
job_impl
.process(&self.banner_api, &self.db_pool)
.await
.map_err(JobError::Recoverable)?;
Ok(())
}
.instrument(span)
.await
}
async fn delete_job(&self, job_id: i32) -> Result<()> {
@@ -166,7 +151,150 @@ impl Worker {
.bind(job_id)
.execute(&self.db_pool)
.await?;
info!(worker_id = self.id, job_id, "Job unlocked for retry");
Ok(())
}
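/// Atomically unlock the job and bump its retry count in one statement.
///
/// The RETURNING CASE yields NULL once the incremented count reaches
/// max_retries, so is_some() on the fetched value doubles as the
/// "may retry" flag without a second round-trip.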
async fn unlock_and_increment_retry(&self, job_id: i32, max_retries: i32) -> Result<bool> {
let result = sqlx::query_scalar::<_, Option<i32>>(
"UPDATE scrape_jobs
SET locked_at = NULL, retry_count = retry_count + 1
WHERE id = $1
RETURNING CASE WHEN retry_count + 1 < $2 THEN retry_count + 1 ELSE NULL END",
)
.bind(job_id)
.bind(max_retries)
.fetch_one(&self.db_pool)
.await?;
Ok(result.is_some())
}
/// Handle shutdown signal received during job processing
async fn handle_shutdown_during_processing(&self, job_id: i32) {
info!(
worker_id = self.id,
job_id, "Shutdown received during job processing"
);
if let Err(e) = self.unlock_job(job_id).await {
warn!(
worker_id = self.id,
job_id,
error = ?e,
"Failed to unlock job during shutdown"
);
} else {
debug!(worker_id = self.id, job_id, "Job unlocked during shutdown");
}
info!(worker_id = self.id, "Worker exiting gracefully");
}
/// Handle the result of job processing
async fn handle_job_result(
&self,
job_id: i32,
retry_count: i32,
max_retries: i32,
result: Result<(), JobError>,
duration: std::time::Duration,
) {
match result {
Ok(()) => {
debug!(
worker_id = self.id,
job_id,
duration_ms = duration.as_millis(),
"Job completed successfully"
);
if let Err(e) = self.delete_job(job_id).await {
error!(worker_id = self.id, job_id, error = ?e, "Failed to delete completed job");
}
}
Err(JobError::Recoverable(e)) => {
self.handle_recoverable_error(job_id, retry_count, max_retries, e, duration)
.await;
}
Err(JobError::Unrecoverable(e)) => {
error!(
worker_id = self.id,
job_id,
duration_ms = duration.as_millis(),
error = ?e,
"Job corrupted, deleting"
);
if let Err(e) = self.delete_job(job_id).await {
error!(worker_id = self.id, job_id, error = ?e, "Failed to delete corrupted job");
}
}
}
}
/// Handle recoverable errors by logging appropriately and unlocking the job
async fn handle_recoverable_error(
&self,
job_id: i32,
retry_count: i32,
max_retries: i32,
e: anyhow::Error,
duration: std::time::Duration,
) {
let next_attempt = retry_count.saturating_add(1);
let remaining_retries = max_retries.saturating_sub(next_attempt);
// Log the error appropriately based on type
if let Some(BannerApiError::InvalidSession(_)) = e.downcast_ref::<BannerApiError>() {
warn!(
worker_id = self.id,
job_id,
duration_ms = duration.as_millis(),
retry_attempt = next_attempt,
max_retries = max_retries,
remaining_retries = remaining_retries,
"Invalid session detected, will retry"
);
} else {
error!(
worker_id = self.id,
job_id,
duration_ms = duration.as_millis(),
retry_attempt = next_attempt,
max_retries = max_retries,
remaining_retries = remaining_retries,
error = ?e,
"Failed to process job, will retry"
);
}
// Atomically unlock and increment retry count, checking if retry is allowed
match self.unlock_and_increment_retry(job_id, max_retries).await {
Ok(can_retry) if can_retry => {
info!(
worker_id = self.id,
job_id,
retry_attempt = next_attempt,
remaining_retries = remaining_retries,
"Job unlocked for retry"
);
}
Ok(_) => {
// Max retries exceeded (detected atomically)
error!(
worker_id = self.id,
job_id,
duration_ms = duration.as_millis(),
retry_count = next_attempt,
max_retries = max_retries,
error = ?e,
"Job failed permanently (max retries exceeded), deleting"
);
if let Err(e) = self.delete_job(job_id).await {
error!(worker_id = self.id, job_id, error = ?e, "Failed to delete failed job");
}
}
Err(e) => {
error!(worker_id = self.id, job_id, error = ?e, "Failed to unlock and increment retry count");
}
}
}
}

View File

@@ -1,20 +1,193 @@
use super::Service;
use crate::bot::{Data, get_commands};
use crate::config::Config;
use crate::state::AppState;
use num_format::{Locale, ToFormattedString};
use serenity::Client;
use serenity::all::{ActivityData, ClientBuilder, GatewayIntents};
use std::sync::Arc;
use tracing::{error, warn};
use std::time::Duration;
use tokio::sync::{Mutex, broadcast};
use tokio::task::JoinHandle;
use tracing::{debug, error, info, warn};
/// Discord bot service implementation
pub struct BotService {
client: Client,
shard_manager: Arc<serenity::gateway::ShardManager>,
status_task_handle: Arc<Mutex<Option<JoinHandle<()>>>>,
status_shutdown_tx: Option<broadcast::Sender<()>>,
}
impl BotService {
pub fn new(client: Client) -> Self {
/// Create a new Discord bot client with full configuration
pub async fn create_client(
config: &Config,
app_state: AppState,
status_task_handle: Arc<Mutex<Option<JoinHandle<()>>>>,
status_shutdown_rx: broadcast::Receiver<()>,
) -> Result<Client, anyhow::Error> {
let intents = GatewayIntents::non_privileged();
let bot_target_guild = config.bot_target_guild;
let framework = poise::Framework::builder()
.options(poise::FrameworkOptions {
commands: get_commands(),
pre_command: |ctx| {
Box::pin(async move {
let content = match ctx {
poise::Context::Application(_) => ctx.invocation_string(),
poise::Context::Prefix(prefix) => prefix.msg.content.to_string(),
};
let channel_name = ctx
.channel_id()
.name(ctx.http())
.await
.unwrap_or("unknown".to_string());
let span = tracing::Span::current();
span.record("command_name", ctx.command().qualified_name.as_str());
span.record("invocation", ctx.invocation_string());
span.record("msg.content", content.as_str());
span.record("msg.author", ctx.author().tag().as_str());
span.record("msg.id", ctx.id());
span.record("msg.channel_id", ctx.channel_id().get());
span.record("msg.channel", channel_name.as_str());
tracing::info!(
command_name = ctx.command().qualified_name.as_str(),
invocation = ctx.invocation_string(),
msg.content = %content,
msg.author = %ctx.author().tag(),
msg.author_id = %ctx.author().id,
msg.id = %ctx.id(),
msg.channel = %channel_name.as_str(),
msg.channel_id = %ctx.channel_id(),
"{} invoked by {}",
ctx.command().name,
ctx.author().tag()
);
})
},
on_error: |error| {
Box::pin(async move {
if let Err(e) = poise::builtins::on_error(error).await {
tracing::error!(error = %e, "Fatal error while sending error message");
}
})
},
..Default::default()
})
.setup(move |ctx, _ready, framework| {
let app_state = app_state.clone();
let status_task_handle = status_task_handle.clone();
Box::pin(async move {
poise::builtins::register_in_guild(
ctx,
&framework.options().commands,
bot_target_guild.into(),
)
.await?;
poise::builtins::register_globally(ctx, &framework.options().commands).await?;
// Start status update task with shutdown support
let handle = Self::start_status_update_task(
ctx.clone(),
app_state.clone(),
status_shutdown_rx,
);
*status_task_handle.lock().await = Some(handle);
Ok(Data { app_state })
})
})
.build();
Ok(ClientBuilder::new(config.bot_token.clone(), intents)
.framework(framework)
.await?)
}
/// Start the status update task for the Discord bot with graceful shutdown support
fn start_status_update_task(
ctx: serenity::client::Context,
app_state: AppState,
mut shutdown_rx: broadcast::Receiver<()>,
) -> JoinHandle<()> {
tokio::spawn(async move {
let max_interval = Duration::from_secs(300); // 5 minutes
let base_interval = Duration::from_secs(30);
let mut interval = tokio::time::interval(base_interval);
let mut previous_course_count: Option<i64> = None;
// This runs once immediately on startup, then with adaptive intervals
loop {
tokio::select! {
_ = interval.tick() => {
// Get the course count, update the activity if it has changed/hasn't been set this session
let course_count = app_state.get_course_count().await.unwrap();
if previous_course_count.is_none() || previous_course_count != Some(course_count) {
ctx.set_activity(Some(ActivityData::playing(format!(
"Querying {:} classes",
course_count.to_formatted_string(&Locale::en)
))));
}
// Increase or reset the interval
interval = tokio::time::interval(
// Avoid logging the first 'change'
if course_count != previous_course_count.unwrap_or(0) {
if previous_course_count.is_some() {
debug!(
new_course_count = course_count,
last_interval = interval.period().as_secs(),
"Course count changed, resetting interval"
);
}
// Record the new course count
previous_course_count = Some(course_count);
// Reset to base interval
base_interval
} else {
// Increase interval by 10% (up to maximum)
let new_interval = interval.period().mul_f32(1.1).min(max_interval);
debug!(
current_course_count = course_count,
last_interval = interval.period().as_secs(),
new_interval = new_interval.as_secs(),
"Course count unchanged, increasing interval"
);
new_interval
},
);
// Reset the interval, otherwise it will tick again immediately
interval.reset();
}
_ = shutdown_rx.recv() => {
info!("Status update task received shutdown signal");
break;
}
}
}
})
}
pub fn new(
client: Client,
status_task_handle: Arc<Mutex<Option<JoinHandle<()>>>>,
status_shutdown_tx: broadcast::Sender<()>,
) -> Self {
let shard_manager = client.shard_manager.clone();
Self {
client,
shard_manager,
status_task_handle,
status_shutdown_tx: Some(status_shutdown_tx),
}
}
}
@@ -39,6 +212,28 @@ impl Service for BotService {
}
async fn shutdown(&mut self) -> Result<(), anyhow::Error> {
// Signal status update task to stop
if let Some(status_shutdown_tx) = self.status_shutdown_tx.take() {
let _ = status_shutdown_tx.send(());
}
// Wait for status update task to complete (with timeout)
let handle = self.status_task_handle.lock().await.take();
if let Some(handle) = handle {
match tokio::time::timeout(Duration::from_secs(2), handle).await {
Ok(Ok(())) => {
debug!("Status update task completed gracefully");
}
Ok(Err(e)) => {
warn!(error = ?e, "Status update task panicked");
}
Err(_) => {
warn!("Status update task did not complete within 2s timeout");
}
}
}
// Shutdown Discord shards
self.shard_manager.shutdown_all().await;
Ok(())
}
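
The status task's adaptive polling boils down to this small policy function (a distillation, not code from the diff):

use std::time::Duration;

/// Reset to the base interval when the observed count changes; otherwise
/// grow the current interval by 10%, capped at the maximum.
fn next_interval(current: Duration, changed: bool) -> Duration {
    const BASE: Duration = Duration::from_secs(30);
    const MAX: Duration = Duration::from_secs(300);
    if changed { BASE } else { current.mul_f32(1.1).min(MAX) }
}

fn main() {
    let mut interval = Duration::from_secs(30);
    for changed in [false, false, true] {
        interval = next_interval(interval, changed);
        println!("next tick in {interval:?}");
    }
}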

View File

@@ -1,15 +1,16 @@
use std::collections::HashMap;
use std::time::Duration;
use tokio::sync::broadcast;
use tokio::task::JoinHandle;
use tracing::{debug, error, info, trace, warn};
use tokio::sync::{broadcast, mpsc};
use tracing::{debug, info, trace, warn};
use crate::services::{Service, ServiceResult, run_service};
/// Manages multiple services and their lifecycle
pub struct ServiceManager {
registered_services: HashMap<String, Box<dyn Service>>,
running_services: HashMap<String, JoinHandle<ServiceResult>>,
service_handles: HashMap<String, tokio::task::AbortHandle>,
completion_rx: Option<mpsc::UnboundedReceiver<(String, ServiceResult)>>,
completion_tx: mpsc::UnboundedSender<(String, ServiceResult)>,
shutdown_tx: broadcast::Sender<()>,
}
@@ -22,9 +23,13 @@ impl Default for ServiceManager {
impl ServiceManager {
pub fn new() -> Self {
let (shutdown_tx, _) = broadcast::channel(1);
let (completion_tx, completion_rx) = mpsc::unbounded_channel();
Self {
registered_services: HashMap::new(),
running_services: HashMap::new(),
service_handles: HashMap::new(),
completion_rx: Some(completion_rx),
completion_tx,
shutdown_tx,
}
}
@@ -46,9 +51,20 @@ impl ServiceManager {
for (name, service) in self.registered_services.drain() {
let shutdown_rx = self.shutdown_tx.subscribe();
let handle = tokio::spawn(run_service(service, shutdown_rx));
let completion_tx = self.completion_tx.clone();
let name_clone = name.clone();
// Spawn service task
let handle = tokio::spawn(async move {
let result = run_service(service, shutdown_rx).await;
// Send completion notification
let _ = completion_tx.send((name_clone, result));
});
// Store abort handle for shutdown control
self.service_handles
.insert(name.clone(), handle.abort_handle());
debug!(service = name, id = ?handle.id(), "service spawned");
self.running_services.insert(name, handle);
}
info!(
@@ -62,7 +78,7 @@ impl ServiceManager {
/// Run all services until one completes or fails
/// Returns the first service that completes and its result
pub async fn run(&mut self) -> (String, ServiceResult) {
if self.running_services.is_empty() {
if self.service_handles.is_empty() {
return (
"none".to_string(),
ServiceResult::Error(anyhow::anyhow!("No services to run")),
@@ -71,99 +87,134 @@ impl ServiceManager {
info!(
"servicemanager running {} services",
self.running_services.len()
self.service_handles.len()
);
// Wait for any service to complete
loop {
let mut completed_services = Vec::new();
// Wait for any service to complete via the channel
let completion_rx = self
.completion_rx
.as_mut()
.expect("completion_rx should be available");
for (name, handle) in &mut self.running_services {
if handle.is_finished() {
completed_services.push(name.clone());
}
}
if let Some(completed_name) = completed_services.first() {
let handle = self.running_services.remove(completed_name).unwrap();
match handle.await {
Ok(result) => {
return (completed_name.clone(), result);
}
Err(e) => {
error!(service = completed_name, "service task panicked: {e}");
return (
completed_name.clone(),
ServiceResult::Error(anyhow::anyhow!("Task panic: {e}")),
);
}
}
}
// Small delay to prevent busy-waiting
tokio::time::sleep(Duration::from_millis(10)).await;
}
completion_rx
.recv()
.await
.map(|(name, result)| {
self.service_handles.remove(&name);
(name, result)
})
.unwrap_or_else(|| {
(
"channel_closed".to_string(),
ServiceResult::Error(anyhow::anyhow!("Completion channel closed")),
)
})
}
/// Shutdown all services gracefully with a timeout.
///
/// All services receive the shutdown signal simultaneously and shut down in parallel.
/// Each service gets the full timeout duration (they don't share/consume from a budget).
/// If any service fails to shut down within the timeout, it will be aborted.
///
/// Returns the elapsed time if all succeed, or a list of failed service names.
pub async fn shutdown(&mut self, timeout: Duration) -> Result<Duration, Vec<String>> {
    let service_count = self.service_handles.len();
    let service_names: Vec<_> = self.service_handles.keys().cloned().collect();

    info!(
        service_count,
        services = ?service_names,
        timeout = format!("{:.2?}", timeout),
        "shutting down {} services in parallel with {:?} timeout each",
        service_count,
        timeout
    );

    if service_count == 0 {
        return Ok(Duration::ZERO);
    }

    // Send shutdown signal to all services simultaneously
    let _ = self.shutdown_tx.send(());

    let start_time = std::time::Instant::now();

    // Collect results from all services with timeout
    let completion_rx = self
        .completion_rx
        .as_mut()
        .expect("completion_rx should be available");

    // Collect all completion results with a single timeout
    let collect_future = async {
        let mut collected: Vec<Option<(String, ServiceResult)>> = Vec::new();
        for _ in 0..service_count {
            if let Some(result) = completion_rx.recv().await {
                collected.push(Some(result));
            } else {
                collected.push(None);
            }
        }
        collected
    };

    let results = match tokio::time::timeout(timeout, collect_future).await {
        Ok(results) => results,
        Err(_) => {
            // Timeout exceeded - abort all remaining services
            warn!(
                timeout = format!("{:.2?}", timeout),
                "shutdown timeout exceeded - aborting all remaining services"
            );
            let failed: Vec<String> = self.service_handles.keys().cloned().collect();
            for handle in self.service_handles.values() {
                handle.abort();
            }
            self.service_handles.clear();
            return Err(failed);
        }
    };

    // Process results and identify failures
    let mut failed_services = Vec::new();
    for (name, service_result) in results.into_iter().flatten() {
        self.service_handles.remove(&name);
        if matches!(service_result, ServiceResult::GracefulShutdown) {
            trace!(service = name, "service shutdown completed");
        } else {
            warn!(
                service = name,
                result = ?service_result,
                "service shutdown with non-graceful result"
            );
            failed_services.push(name);
        }
    }

    let elapsed = start_time.elapsed();
    if failed_services.is_empty() {
        info!(
            service_count,
            elapsed = format!("{:.2?}", elapsed),
            "all services shutdown successfully: {}",
            service_names.join(", ")
        );
        Ok(elapsed)
    } else {
        warn!(
            failed_count = failed_services.len(),
            failed_services = ?failed_services,
            elapsed = format!("{:.2?}", elapsed),
            "{} service(s) failed to shutdown gracefully: {}",
            failed_services.len(),
            failed_services.join(", ")
        );
        Err(failed_services)
    }
}
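For context, this is how a caller drives the parallel shutdown; a minimal sketch, assuming a ServiceManager has already been constructed and populated (the wiring shown here is illustrative, not from this diff):

use std::time::Duration;

async fn stop_everything(mut manager: ServiceManager) {
    match manager.shutdown(Duration::from_secs(8)).await {
        // On success, the elapsed wall-clock time is returned.
        Ok(elapsed) => println!("all services stopped in {elapsed:.2?}"),
        // On failure, the names of the services that did not stop are returned.
        Err(failed) => eprintln!("services failed to stop: {}", failed.join(", ")),
    }
}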

View File

@@ -23,7 +23,11 @@ pub trait Service: Send + Sync {
/// Gracefully shutdown the service
///
/// Implementations should initiate shutdown and MAY wait for completion.
/// Services are expected to respond to this call and begin cleanup promptly.
/// When managed by ServiceManager, all services shut down in parallel and each
/// gets the full configured timeout (default 8s); services should still complete
/// shutdown as quickly as possible to avoid being aborted.
async fn shutdown(&mut self) -> Result<(), anyhow::Error>;
}
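The trait leaves room for implementations that only signal and return; a minimal sketch of that pattern, assuming a service whose run loop listens on a oneshot channel (the struct and field here are hypothetical, not from this diff):

use tokio::sync::oneshot;

struct ExampleService {
    // Hypothetical: signals this service's internal loop to stop.
    shutdown_tx: Option<oneshot::Sender<()>>,
}

impl ExampleService {
    // Mirrors the shape of Service::shutdown: initiate promptly, don't
    // block on full completion (the manager observes that separately).
    async fn shutdown(&mut self) -> Result<(), anyhow::Error> {
        if let Some(tx) = self.shutdown_tx.take() {
            let _ = tx.send(());
        }
        Ok(())
    }
}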

View File

@@ -3,7 +3,7 @@ use crate::web::{BannerState, create_router};
use std::net::SocketAddr;
use tokio::net::TcpListener;
use tokio::sync::broadcast;
use tracing::{info, trace, warn};
/// Web server service implementation
pub struct WebService {
@@ -33,16 +33,12 @@ impl Service for WebService {
let app = create_router(self.banner_state.clone());
let addr = SocketAddr::from(([0, 0, 0, 0], self.port));
let listener = TcpListener::bind(addr).await?;
info!(
service = "web",
address = %addr,
link = format!("http://localhost:{}", addr.port()),
"web server listening"
);
@@ -61,13 +57,16 @@ impl Service for WebService {
})
.await?;
trace!(service = "web", "graceful shutdown completed");
info!(service = "web", "web server stopped");
Ok(())
}
async fn shutdown(&mut self) -> Result<(), anyhow::Error> {
if let Some(shutdown_tx) = self.shutdown_tx.take() {
let _ = shutdown_tx.send(());
trace!(service = "web", "sent shutdown signal to axum");
} else {
warn!(
service = "web",

106
src/signals.rs Normal file
View File

@@ -0,0 +1,106 @@
use crate::services::ServiceResult;
use crate::services::manager::ServiceManager;
use std::process::ExitCode;
use std::time::Duration;
use tokio::signal;
use tracing::{error, info, warn};
/// Handle application shutdown signals and graceful shutdown
pub async fn handle_shutdown_signals(
mut service_manager: ServiceManager,
shutdown_timeout: Duration,
) -> ExitCode {
// Set up signal handling for both SIGINT (Ctrl+C) and SIGTERM
let ctrl_c = async {
signal::ctrl_c()
.await
.expect("Failed to install CTRL+C signal handler");
info!("received ctrl+c, gracefully shutting down...");
};
#[cfg(unix)]
let sigterm = async {
use tokio::signal::unix::{SignalKind, signal};
let mut sigterm_stream =
signal(SignalKind::terminate()).expect("Failed to install SIGTERM signal handler");
sigterm_stream.recv().await;
info!("received SIGTERM, gracefully shutting down...");
};
#[cfg(not(unix))]
let sigterm = async {
// On non-Unix systems, create a future that never completes
// This ensures the select! macro works correctly
std::future::pending::<()>().await;
};
// Main application loop - wait for services or signals
let mut exit_code = ExitCode::SUCCESS;
tokio::select! {
(service_name, result) = service_manager.run() => {
// A service completed unexpectedly
match result {
ServiceResult::GracefulShutdown => {
info!(service = service_name, "service completed gracefully");
}
ServiceResult::NormalCompletion => {
warn!(service = service_name, "service completed unexpectedly");
exit_code = ExitCode::FAILURE;
}
ServiceResult::Error(e) => {
error!(service = service_name, error = ?e, "service failed");
exit_code = ExitCode::FAILURE;
}
}
// Shutdown remaining services
exit_code = handle_graceful_shutdown(service_manager, shutdown_timeout, exit_code).await;
}
_ = ctrl_c => {
// User requested shutdown via Ctrl+C
info!("user requested shutdown via ctrl+c");
exit_code = handle_graceful_shutdown(service_manager, shutdown_timeout, ExitCode::SUCCESS).await;
}
_ = sigterm => {
// System requested shutdown via SIGTERM
info!("system requested shutdown via SIGTERM");
exit_code = handle_graceful_shutdown(service_manager, shutdown_timeout, ExitCode::SUCCESS).await;
}
}
info!(exit_code = ?exit_code, "application shutdown complete");
exit_code
}
/// Handle graceful shutdown of remaining services
async fn handle_graceful_shutdown(
mut service_manager: ServiceManager,
shutdown_timeout: Duration,
current_exit_code: ExitCode,
) -> ExitCode {
match service_manager.shutdown(shutdown_timeout).await {
Ok(elapsed) => {
info!(
remaining = format!("{:.2?}", shutdown_timeout - elapsed),
"graceful shutdown complete"
);
current_exit_code
}
Err(pending_services) => {
warn!(
pending_count = pending_services.len(),
pending_services = ?pending_services,
"graceful shutdown elapsed - {} service(s) did not complete",
pending_services.len()
);
// Non-zero exit code, default to FAILURE if not set
if current_exit_code == ExitCode::SUCCESS {
ExitCode::FAILURE
} else {
current_exit_code
}
}
}
}
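From main, the whole arrangement reduces to handing the manager and a timeout to this function; a sketch, assuming service registration happens elsewhere and that ServiceManager::default() exists (a hypothetical constructor):

use std::process::ExitCode;
use std::time::Duration;

#[tokio::main]
async fn main() -> ExitCode {
    // Hypothetical setup; the real construction and registration may differ.
    let manager = ServiceManager::default();
    // 8s matches the default budget mentioned in the Service trait docs.
    handle_shutdown_signals(manager, Duration::from_secs(8)).await
}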

1
src/utils/mod.rs Normal file
View File

@@ -0,0 +1 @@
pub mod shutdown;

32
src/utils/shutdown.rs Normal file
View File

@@ -0,0 +1,32 @@
use tokio::task::JoinHandle;
use tracing::warn;
/// Helper for joining multiple task handles with proper error handling.
///
/// This function waits for all tasks to complete and reports any that panicked.
/// Returns an error if any task panicked, otherwise returns Ok.
pub async fn join_tasks(handles: Vec<JoinHandle<()>>) -> Result<(), anyhow::Error> {
let results = futures::future::join_all(handles).await;
let failed = results.iter().filter(|r| r.is_err()).count();
if failed > 0 {
warn!(failed_count = failed, "Some tasks panicked during shutdown");
Err(anyhow::anyhow!("{} task(s) panicked", failed))
} else {
Ok(())
}
}
/// Helper for joining multiple task handles with a timeout.
///
/// Waits for all tasks to complete within the specified timeout.
/// If the timeout elapses, an error is returned; tasks still running are detached, not aborted.
pub async fn join_tasks_with_timeout(
handles: Vec<JoinHandle<()>>,
timeout: std::time::Duration,
) -> Result<(), anyhow::Error> {
match tokio::time::timeout(timeout, join_tasks(handles)).await {
Ok(result) => result,
Err(_) => Err(anyhow::anyhow!("Task join timed out after {:?}", timeout)),
}
}
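A typical call site, sketched under the assumption that a service kept handles to its spawned workers (drain_workers is illustrative):

use std::time::Duration;
use tokio::task::JoinHandle;

async fn drain_workers(workers: Vec<JoinHandle<()>>) -> Result<(), anyhow::Error> {
    // Two-second budget; if it elapses, an error is returned and any
    // still-running workers are left detached.
    join_tasks_with_timeout(workers, Duration::from_secs(2)).await
}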

View File

@@ -2,24 +2,26 @@
use axum::{
Router,
body::Body,
extract::{Request, State},
http::{HeaderMap, HeaderValue, StatusCode, Uri},
response::{Html, IntoResponse, Json, Response},
routing::get,
};
use http::header;
use serde::Serialize;
use serde_json::{Value, json};
use std::sync::Arc;
use std::{collections::BTreeMap, time::Duration};
use tower_http::timeout::TimeoutLayer;
use tower_http::{
classify::ServerErrorsFailureClass,
cors::{Any, CorsLayer},
trace::TraceLayer,
};
use tracing::{Span, debug, info, warn};

use crate::web::assets::{WebAssets, get_asset_metadata_cached};
/// Set appropriate caching headers based on asset type
fn set_caching_headers(response: &mut Response, path: &str, etag: &str) {
let headers = response.headers_mut();
@@ -58,9 +60,7 @@ fn set_caching_headers(response: &mut Response, path: &str, etag: &str) {
/// Shared application state for web server
#[derive(Clone)]
pub struct BannerState {}
/// Creates the web server router
pub fn create_router(state: BannerState) -> Router {
@@ -70,47 +70,64 @@ pub fn create_router(state: BannerState) -> Router {
.route("/metrics", get(metrics))
.with_state(state);
let mut router = Router::new().nest("/api", api_router);

if cfg!(debug_assertions) {
    // Development mode: frontend is served by the Vite dev server, so allow CORS
    router = router.layer(
        CorsLayer::new()
            .allow_origin(Any)
            .allow_methods(Any)
            .allow_headers(Any),
    )
} else {
    // Production mode: serve embedded assets and handle SPA routing
    router = router.fallback(fallback);
}

router.layer((
    TraceLayer::new_for_http()
        .make_span_with(|request: &Request<Body>| {
            tracing::debug_span!("request", path = request.uri().path())
        })
        .on_request(())
        .on_body_chunk(())
        .on_eos(())
        .on_response(
            |response: &Response<Body>, latency: Duration, _span: &Span| {
                let latency_threshold = if cfg!(debug_assertions) {
                    Duration::from_millis(100)
                } else {
                    Duration::from_millis(1000)
                };

                // Format latency, status, and code
                let (latency_str, status) = (
                    format!("{latency:.2?}"),
                    format!(
                        "{} {}",
                        response.status().as_u16(),
                        response.status().canonical_reason().unwrap_or("??")
                    ),
                );

                // Log in warn if latency is above threshold, otherwise debug
                if latency > latency_threshold {
                    warn!(latency = latency_str, status = status, "Response");
                } else {
                    debug!(latency = latency_str, status = status, "Response");
                }
            },
        )
        .on_failure(
            |error: ServerErrorsFailureClass, latency: Duration, _span: &Span| {
                warn!(
                    error = ?error,
                    latency = format!("{latency:.2?}"),
                    "Request failed"
                );
            },
        ),
    TimeoutLayer::new(Duration::from_secs(10)),
))
}
/// Handler that extracts request information for caching
@@ -193,25 +210,78 @@ async fn health() -> Json<Value> {
}))
}
#[derive(Serialize)]
enum Status {
    Disabled,
    Connected,
    Active,
    Healthy,
    Error,
}

#[derive(Serialize)]
struct ServiceInfo {
    name: String,
    status: Status,
}

#[derive(Serialize)]
struct StatusResponse {
    status: Status,
    version: String,
    commit: String,
    services: BTreeMap<String, ServiceInfo>,
}

/// Status endpoint showing bot and system status
async fn status(State(_state): State<BannerState>) -> Json<StatusResponse> {
    let mut services = BTreeMap::new();

    // Bot service status - hardcoded as disabled for now
    services.insert(
        "bot".to_string(),
        ServiceInfo {
            name: "Bot".to_string(),
            status: Status::Disabled,
        },
    );

    // Banner API status - always connected for now
    services.insert(
        "banner".to_string(),
        ServiceInfo {
            name: "Banner".to_string(),
            status: Status::Connected,
        },
    );

    // Discord status - hardcoded as disabled for now
    services.insert(
        "discord".to_string(),
        ServiceInfo {
            name: "Discord".to_string(),
            status: Status::Disabled,
        },
    );

    let overall_status = if services.values().any(|s| matches!(s.status, Status::Error)) {
        Status::Error
    } else if services
        .values()
        .all(|s| matches!(s.status, Status::Active | Status::Connected))
    {
        Status::Active
    } else {
        // If we have any Disabled services but no errors, show as Healthy
        Status::Healthy
    };

    Json(StatusResponse {
        status: overall_status,
        version: env!("CARGO_PKG_VERSION").to_string(),
        commit: env!("GIT_COMMIT_HASH").to_string(),
        services,
    })
}
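Since env!("GIT_COMMIT_HASH") is resolved at compile time, the build must export that variable; the repo's build script isn't shown in this diff, but the usual pattern looks like this (a sketch, assuming git is available at build time):

// build.rs (sketch)
use std::process::Command;

fn main() {
    let hash = Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .ok()
        .and_then(|o| String::from_utf8(o.stdout).ok())
        .unwrap_or_else(|| "unknown".into());
    // Exposes GIT_COMMIT_HASH to env!() in the crate being built.
    println!("cargo:rustc-env=GIT_COMMIT_HASH={}", hash.trim());
}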
/// Metrics endpoint for monitoring

39
tests/basic_test.rs Normal file
View File

@@ -0,0 +1,39 @@
use banner::utils::shutdown::join_tasks;
use tokio::task::JoinHandle;
#[tokio::test]
async fn test_join_tasks_success() {
// Create some tasks that complete successfully
let handles: Vec<JoinHandle<()>> = vec![
tokio::spawn(async { tokio::time::sleep(tokio::time::Duration::from_millis(10)).await }),
tokio::spawn(async { tokio::time::sleep(tokio::time::Duration::from_millis(20)).await }),
tokio::spawn(async { /* immediate completion */ }),
];
// All tasks should complete successfully
let result = join_tasks(handles).await;
assert!(
result.is_ok(),
"Expected all tasks to complete successfully"
);
}
#[tokio::test]
async fn test_join_tasks_with_panic() {
// Create some tasks, including one that panics
let handles: Vec<JoinHandle<()>> = vec![
tokio::spawn(async { tokio::time::sleep(tokio::time::Duration::from_millis(10)).await }),
tokio::spawn(async { panic!("intentional test panic") }),
tokio::spawn(async { /* immediate completion */ }),
];
// Should return an error because one task panicked
let result = join_tasks(handles).await;
assert!(result.is_err(), "Expected an error when a task panics");
let error_msg = result.unwrap_err().to_string();
assert!(
error_msg.contains("1 task(s) panicked"),
"Error message should mention panicked tasks"
);
}
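A natural companion test, not in this diff, would cover the timeout path; a sketch reusing this file's imports:

use banner::utils::shutdown::join_tasks_with_timeout;

#[tokio::test]
async fn test_join_tasks_timeout() {
    // One task sleeps far longer than the allotted timeout.
    let handles: Vec<JoinHandle<()>> = vec![tokio::spawn(async {
        tokio::time::sleep(tokio::time::Duration::from_secs(60)).await;
    })];
    let result = join_tasks_with_timeout(handles, std::time::Duration::from_millis(50)).await;
    assert!(result.is_err(), "Expected an error when the timeout elapses");
}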

30
web/biome.json Normal file
View File

@@ -0,0 +1,30 @@
{
"$schema": "https://biomejs.dev/schemas/1.9.4/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true
},
"files": {
"ignoreUnknown": false,
"ignore": ["dist/", "node_modules/", ".tanstack/"]
},
"formatter": {
"enabled": true,
"indentStyle": "space",
"indentWidth": 2,
"lineWidth": 100,
"lineEnding": "lf"
},
"javascript": {
"formatter": {
"quoteStyle": "double",
"trailingCommas": "es5",
"semicolons": "always",
"arrowParentheses": "always"
}
},
"linter": {
"enabled": false
}
}

1297
web/bun.lock Normal file
View File

File diff suppressed because it is too large Load Diff

60
web/eslint.config.js Normal file
View File

@@ -0,0 +1,60 @@
import js from "@eslint/js";
import tseslint from "typescript-eslint";
import react from "eslint-plugin-react";
import reactHooks from "eslint-plugin-react-hooks";
import reactRefresh from "eslint-plugin-react-refresh";
export default tseslint.config(
// Ignore generated files and build outputs
{
ignores: ["dist", "node_modules", "src/routeTree.gen.ts", "*.config.js"],
},
// Base configs
js.configs.recommended,
...tseslint.configs.recommendedTypeChecked,
// React plugin configuration
{
files: ["**/*.{ts,tsx}"],
plugins: {
react,
"react-hooks": reactHooks,
"react-refresh": reactRefresh,
},
languageOptions: {
parserOptions: {
project: true,
tsconfigRootDir: import.meta.dirname,
ecmaFeatures: {
jsx: true,
},
},
},
settings: {
react: {
version: "19.0",
},
},
rules: {
// React rules
...react.configs.recommended.rules,
...react.configs["jsx-runtime"].rules,
...reactHooks.configs.recommended.rules,
// React Refresh
"react-refresh/only-export-components": ["warn", { allowConstantExport: true }],
// TypeScript overrides
"@typescript-eslint/no-unused-vars": [
"error",
{
argsIgnorePattern: "^_",
varsIgnorePattern: "^_",
},
],
"@typescript-eslint/no-explicit-any": "warn",
// Disable prop-types since we're using TypeScript
"react/prop-types": "off",
},
}
);

View File

@@ -1,4 +1,4 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
@@ -7,11 +7,11 @@
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-tsrouter-app"
content="Banner, a Discord bot and web interface for UTSA Course Monitoring"
/>
<link rel="apple-touch-icon" href="/logo192.png" />
<link rel="manifest" href="/manifest.json" />
<title>Banner</title>
</head>
<body>
<div id="app"></div>

View File

@@ -1,5 +1,5 @@
{
"name": "web-template",
"name": "banner-web",
"private": true,
"type": "module",
"scripts": {
@@ -7,7 +7,11 @@
"start": "vite --port 3000",
"build": "vite build && tsc",
"serve": "vite preview",
"test": "vitest run"
"test": "vitest run",
"lint": "tsc && eslint . --ext .ts,.tsx",
"typecheck": "tsc --noEmit",
"format": "biome format --write .",
"format:check": "biome format ."
},
"dependencies": {
"@radix-ui/themes": "^3.2.1",
@@ -16,21 +20,30 @@
"@tanstack/react-router-devtools": "^1.131.5",
"@tanstack/router-plugin": "^1.121.2",
"lucide-react": "^0.544.0",
"next-themes": "^0.4.6",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"react-timeago": "^8.3.0",
"recharts": "^3.2.0"
},
"devDependencies": {
"@biomejs/biome": "^1.9.4",
"@eslint/js": "^9.39.0",
"@testing-library/dom": "^10.4.0",
"@testing-library/react": "^16.2.0",
"@types/node": "^24.3.3",
"@types/react": "^19.0.8",
"@types/react-dom": "^19.0.3",
"@vitejs/plugin-react": "^4.3.4",
"eslint": "^9.39.0",
"eslint-plugin-react": "^7.37.5",
"eslint-plugin-react-hooks": "^7.0.1",
"eslint-plugin-react-refresh": "^0.4.24",
"jsdom": "^26.0.0",
"typescript": "^5.7.2",
"typescript-eslint": "^8.46.2",
"vite": "^6.3.5",
"vitest": "^3.0.5",
"web-vitals": "^4.2.4"
}
}

4594
web/pnpm-lock.yaml generated
View File

File diff suppressed because it is too large Load Diff

View File

@@ -1,6 +1,6 @@
{
"short_name": "TanStack App",
"name": "Create TanStack App Sample",
"short_name": "Banner",
"name": "Banner, a Discord bot and web interface for UTSA Course Monitoring",
"icons": [
{
"src": "favicon.ico",
@@ -20,6 +20,6 @@
],
"start_url": ".",
"display": "standalone",
"theme_color": "#000000",
"theme_color": "#ffffff",
"background_color": "#ffffff"
}

View File

@@ -1,38 +1,34 @@
.App {
  text-align: center;
}

.App-header {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu",
    "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif;
  background-color: var(--color-background);
  color: var(--color-text);
}

@keyframes pulse {
  0%,
  100% {
    opacity: 0.2;
  }
  50% {
    opacity: 0.4;
  }
}

.animate-pulse {
  animation: pulse 2s ease-in-out infinite;
}

/* Screen reader only text */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}

View File

@@ -0,0 +1,60 @@
import { useTheme } from "next-themes";
import { Button } from "@radix-ui/themes";
import { Sun, Moon, Monitor } from "lucide-react";
import { useMemo } from "react";
export function ThemeToggle() {
const { theme, setTheme } = useTheme();
const nextTheme = useMemo(() => {
switch (theme) {
case "light":
return "dark";
case "dark":
return "system";
case "system":
return "light";
default:
console.error(`Invalid theme: ${theme}`);
return "system";
}
}, [theme]);
const icon = useMemo(() => {
if (nextTheme === "system") {
return <Monitor size={18} />;
}
return nextTheme === "dark" ? <Moon size={18} /> : <Sun size={18} />;
}, [nextTheme]);
return (
<Button
variant="ghost"
size="3"
onClick={() => setTheme(nextTheme)}
style={{
cursor: "pointer",
backgroundColor: "transparent",
border: "none",
margin: "4px",
padding: "7px",
borderRadius: "6px",
display: "flex",
alignItems: "center",
justifyContent: "center",
color: "var(--gray-11)",
transition: "background-color 0.2s, color 0.2s",
transform: "scale(1.25)",
}}
onMouseEnter={(e) => {
e.currentTarget.style.backgroundColor = "var(--gray-4)";
}}
onMouseLeave={(e) => {
e.currentTarget.style.backgroundColor = "transparent";
}}
>
{icon}
<span className="sr-only">Toggle theme</span>
</Button>
);
}

View File

@@ -6,21 +6,18 @@ export interface HealthResponse {
timestamp: string;
}
export type Status = "Disabled" | "Connected" | "Active" | "Healthy" | "Error";
export interface ServiceInfo {
name: string;
status: Status;
}
export interface StatusResponse {
  status: Status;
  version: string;
  commit: string;
  services: Record<string, ServiceInfo>;
}
export interface MetricsResponse {
@@ -41,12 +38,10 @@ export class BannerApiClient {
const response = await fetch(`${this.baseUrl}${endpoint}`);

if (!response.ok) {
  throw new Error(`API request failed: ${response.status} ${response.statusText}`);
}

return (await response.json()) as T;
}
async getHealth(): Promise<HealthResponse> {
@@ -63,4 +58,4 @@ export class BannerApiClient {
}
// Export a default instance
export const client = new BannerApiClient();

View File

@@ -1,42 +1,53 @@
import { StrictMode } from "react";
import ReactDOM from "react-dom/client";
import { RouterProvider, createRouter } from "@tanstack/react-router";
import { ThemeProvider } from "next-themes";
import { Theme } from "@radix-ui/themes";

// Import the generated route tree
import { routeTree } from "./routeTree.gen";

import "./styles.css";
import reportWebVitals from "./reportWebVitals.ts";

// Create a new router instance
const router = createRouter({
  routeTree,
  context: {},
  defaultPreload: "intent",
  scrollRestoration: true,
  defaultStructuralSharing: true,
  defaultPreloadStaleTime: 0,
});

// Register the router instance for type safety
declare module "@tanstack/react-router" {
  interface Register {
    router: typeof router;
  }
}

// Render the app
const rootElement = document.getElementById("app");
if (rootElement && !rootElement.innerHTML) {
  const root = ReactDOM.createRoot(rootElement);
  root.render(
    <StrictMode>
      <ThemeProvider
        attribute="class"
        defaultTheme="system"
        enableSystem
        disableTransitionOnChange={false}
      >
        <Theme>
          <RouterProvider router={router} />
        </Theme>
      </ThemeProvider>
    </StrictMode>
  );
}

// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals();

View File

@@ -1,13 +1,13 @@
const reportWebVitals = (onPerfEntry?: () => void) => {
  if (onPerfEntry && onPerfEntry instanceof Function) {
    void import("web-vitals").then(({ onCLS, onINP, onFCP, onLCP, onTTFB }) => {
      onCLS(onPerfEntry);
      onINP(onPerfEntry);
      onFCP(onPerfEntry);
      onLCP(onPerfEntry);
      onTTFB(onPerfEntry);
    });
  }
};

export default reportWebVitals;

View File

@@ -8,52 +8,52 @@
// You should NOT make any changes in this file as it will be overwritten.
// Additionally, you should also exclude this file from your linter and/or formatter to prevent it from being checked or modified.
import { Route as rootRouteImport } from "./routes/__root";
import { Route as IndexRouteImport } from "./routes/index";

const IndexRoute = IndexRouteImport.update({
  id: "/",
  path: "/",
  getParentRoute: () => rootRouteImport,
} as any);

export interface FileRoutesByFullPath {
  "/": typeof IndexRoute;
}
export interface FileRoutesByTo {
  "/": typeof IndexRoute;
}
export interface FileRoutesById {
  __root__: typeof rootRouteImport;
  "/": typeof IndexRoute;
}
export interface FileRouteTypes {
  fileRoutesByFullPath: FileRoutesByFullPath;
  fullPaths: "/";
  fileRoutesByTo: FileRoutesByTo;
  to: "/";
  id: "__root__" | "/";
  fileRoutesById: FileRoutesById;
}
export interface RootRouteChildren {
  IndexRoute: typeof IndexRoute;
}

declare module "@tanstack/react-router" {
  interface FileRoutesByPath {
    "/": {
      id: "/";
      path: "/";
      fullPath: "/";
      preLoaderRoute: typeof IndexRouteImport;
      parentRoute: typeof rootRouteImport;
    };
  }
}

const rootRouteChildren: RootRouteChildren = {
  IndexRoute: IndexRoute,
};
export const routeTree = rootRouteImport
  ._addFileChildren(rootRouteChildren)
  ._addFileTypes<FileRouteTypes>();

View File

@@ -1,22 +1,34 @@
import { Outlet, createRootRoute } from "@tanstack/react-router";
import { TanStackRouterDevtoolsPanel } from "@tanstack/react-router-devtools";
import { TanstackDevtools } from "@tanstack/react-devtools";
import { Theme } from "@radix-ui/themes";
import "@radix-ui/themes/styles.css";
import { ThemeProvider } from "next-themes";

export const Route = createRootRoute({
  component: () => (
    <ThemeProvider
      attribute="class"
      defaultTheme="system"
      enableSystem
      disableTransitionOnChange={false}
    >
      <Theme accentColor="blue" grayColor="gray">
        <Outlet />
        {import.meta.env.DEV ? (
          <TanstackDevtools
            config={{
              position: "bottom-left",
            }}
            plugins={[
              {
                name: "Tanstack Router",
                render: <TanStackRouterDevtoolsPanel />,
              },
            ]}
          />
        ) : null}
      </Theme>
    </ThemeProvider>
  ),
});

View File

@@ -1,87 +1,399 @@
import { createFileRoute } from "@tanstack/react-router";
import { useState, useEffect } from "react";
import { client, type StatusResponse, type Status } from "../lib/api";
import { Card, Flex, Text, Tooltip, Skeleton } from "@radix-ui/themes";
import {
CheckCircle,
XCircle,
Clock,
Bot,
Globe,
Hourglass,
Activity,
MessageCircle,
Circle,
WifiOff,
} from "lucide-react";
import TimeAgo from "react-timeago";
import { ThemeToggle } from "../components/ThemeToggle";
import "../App.css";
const REFRESH_INTERVAL = import.meta.env.DEV ? 3000 : 30000;
const REQUEST_TIMEOUT = 10000; // 10 seconds
const CARD_STYLES = {
padding: "24px",
maxWidth: "400px",
width: "100%",
} as const;
const BORDER_STYLES = {
marginTop: "16px",
paddingTop: "16px",
borderTop: "1px solid var(--gray-7)",
} as const;
const SERVICE_ICONS: Record<string, typeof Bot> = {
bot: Bot,
banner: Globe,
discord: MessageCircle,
};
interface ResponseTiming {
health: number | null;
status: number | null;
}
interface StatusIcon {
icon: typeof CheckCircle;
color: string;
}
interface Service {
name: string;
status: Status;
icon: typeof Bot;
}
type StatusState =
| {
mode: "loading";
}
| {
mode: "response";
timing: ResponseTiming;
lastFetch: Date;
status: StatusResponse;
}
| {
mode: "error";
lastFetch: Date;
}
| {
mode: "timeout";
lastFetch: Date;
};
const formatNumber = (num: number): string => {
return num.toLocaleString();
};
const getStatusIcon = (status: Status | "Unreachable"): StatusIcon => {
const statusMap: Record<Status | "Unreachable", StatusIcon> = {
Active: { icon: CheckCircle, color: "green" },
Connected: { icon: CheckCircle, color: "green" },
Healthy: { icon: CheckCircle, color: "green" },
Disabled: { icon: Circle, color: "gray" },
Error: { icon: XCircle, color: "red" },
Unreachable: { icon: WifiOff, color: "red" },
};
return statusMap[status];
};
const getOverallHealth = (state: StatusState): Status | "Unreachable" => {
if (state.mode === "timeout") return "Unreachable";
if (state.mode === "error") return "Error";
if (state.mode === "response") return state.status.status;
return "Error";
};
const getServices = (state: StatusState): Service[] => {
if (state.mode !== "response") return [];
return Object.entries(state.status.services).map(([serviceId, serviceInfo]) => ({
name: serviceInfo.name,
status: serviceInfo.status,
icon: SERVICE_ICONS[serviceId] || Circle, // fall back to a generic icon for unknown services
}));
};
const StatusDisplay = ({ status }: { status: Status | "Unreachable" }) => {
const { icon: Icon, color } = getStatusIcon(status);
return (
<Flex align="center" gap="2">
<Text
size="2"
style={{
color: status === "Disabled" ? "var(--gray-11)" : undefined,
opacity: status === "Disabled" ? 0.7 : undefined,
}}
>
{status}
</Text>
<Icon color={color} size={16} />
</Flex>
);
};
const ServiceStatus = ({ service }: { service: Service }) => {
return (
<Flex align="center" justify="between">
<Flex align="center" gap="2">
<service.icon size={18} />
<Text style={{ color: "var(--gray-11)" }}>{service.name}</Text>
</Flex>
<StatusDisplay status={service.status} />
</Flex>
);
};
const SkeletonService = () => {
return (
<Flex align="center" justify="between">
<Flex align="center" gap="2">
<Skeleton height="24px" width="18px" />
<Skeleton height="24px" width="60px" />
</Flex>
<Flex align="center" gap="2">
<Skeleton height="20px" width="50px" />
<Skeleton height="20px" width="16px" />
</Flex>
</Flex>
);
};
const TimingRow = ({
icon: Icon,
name,
children,
}: {
icon: React.ComponentType<{ size?: number }>;
name: string;
children: React.ReactNode;
}) => (
<Flex align="center" justify="between">
<Flex align="center" gap="2">
<Icon size={13} />
<Text size="2" color="gray">
{name}
</Text>
</Flex>
{children}
</Flex>
);
function App() {
const [state, setState] = useState<StatusState>({ mode: "loading" });
// State helpers
const isLoading = state.mode === "loading";
const hasError = state.mode === "error";
const hasTimeout = state.mode === "timeout";
const hasResponse = state.mode === "response";
const shouldShowSkeleton = isLoading || hasError;
const shouldShowTiming = hasResponse && state.timing.health !== null;
const shouldShowLastFetch = hasResponse || hasError || hasTimeout;
useEffect(() => {
let timeoutId: NodeJS.Timeout;
const fetchData = async () => {
try {
const startTime = Date.now();
// Create a timeout promise
const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error("Request timeout")), REQUEST_TIMEOUT);
});
// Race between the API call and timeout
const statusData = await Promise.race([client.getStatus(), timeoutPromise]);
const endTime = Date.now();
const responseTime = endTime - startTime;
setState({
mode: "response",
status: statusData,
timing: { health: responseTime, status: responseTime },
lastFetch: new Date(),
});
} catch (err) {
const errorMessage = err instanceof Error ? err.message : "Failed to fetch data";
// Check if it's a timeout error
if (errorMessage === "Request timeout") {
setState({
mode: "timeout",
lastFetch: new Date(),
});
} else {
setState({
mode: "error",
lastFetch: new Date(),
});
}
}
// Schedule the next request after the current one completes
timeoutId = setTimeout(() => void fetchData(), REFRESH_INTERVAL);
};
// Start the first request immediately
void fetchData();
return () => {
if (timeoutId) {
clearTimeout(timeoutId);
}
};
}, []);
const overallHealth = getOverallHealth(state);
const { color: overallColor } = getStatusIcon(overallHealth);
const services = getServices(state);
return (
<div className="App">
<div
style={{
position: "fixed",
top: "20px",
right: "20px",
zIndex: 1000,
}}
>
<ThemeToggle />
</div>
<Flex
direction="column"
align="center"
justify="center"
style={{ minHeight: "100vh", padding: "20px" }}
>
<Card style={CARD_STYLES}>
<Flex direction="column" gap="4">
{/* Overall Status */}
<Flex align="center" justify="between">
<Flex align="center" gap="2">
<Activity
color={isLoading ? undefined : overallColor}
size={18}
className={isLoading ? "animate-pulse" : ""}
style={{
opacity: isLoading ? 0.3 : 1,
transition: "opacity 2s ease-in-out, color 2s ease-in-out",
}}
/>
<Text size="4" style={{ color: "var(--gray-12)" }}>
System Status
</Text>
</Flex>
{isLoading ? (
<Skeleton height="20px" width="80px" />
) : (
<StatusDisplay status={overallHealth} />
)}
</Flex>
{/* Individual Services */}
<Flex direction="column" gap="3" style={{ marginTop: "16px" }}>
{shouldShowSkeleton
? // Show skeletons for 3 services while loading or after an error
Array.from({ length: 3 }).map((_, index) => <SkeletonService key={index} />)
: services.map((service) => <ServiceStatus key={service.name} service={service} />)}
</Flex>
<Flex direction="column" gap="2" style={BORDER_STYLES}>
{isLoading ? (
<TimingRow icon={Hourglass} name="Response Time">
<Skeleton height="18px" width="50px" />
</TimingRow>
) : shouldShowTiming ? (
<TimingRow icon={Hourglass} name="Response Time">
<Text size="2" style={{ color: "var(--gray-11)" }}>
{formatNumber(state.timing.health!)}ms
</Text>
</TimingRow>
) : null}
{shouldShowLastFetch ? (
<TimingRow icon={Clock} name="Last Updated">
{isLoading ? (
<Text size="2" style={{ paddingBottom: "2px" }} color="gray">
Loading...
</Text>
) : (
<Tooltip content={`as of ${state.lastFetch.toLocaleTimeString()}`}>
<abbr
style={{
cursor: "pointer",
textDecoration: "underline",
textDecorationStyle: "dotted",
textDecorationColor: "var(--gray-6)",
textUnderlineOffset: "6px",
}}
>
<Text size="2" style={{ color: "var(--gray-11)" }}>
<TimeAgo date={state.lastFetch} />
</Text>
</abbr>
</Tooltip>
)}
</TimingRow>
) : isLoading ? (
<TimingRow icon={Clock} name="Last Updated">
<Text size="2" color="gray">
Loading...
</Text>
</TimingRow>
) : null}
</Flex>
</Flex>
</Card>
<Flex justify="center" style={{ marginTop: "12px" }} gap="2" align="center">
{__APP_VERSION__ && (
<Text
size="1"
style={{
color: "var(--gray-11)",
}}
>
v{__APP_VERSION__}
</Text>
)}
{__APP_VERSION__ && (
<div
style={{
width: "1px",
height: "12px",
backgroundColor: "var(--gray-10)",
opacity: 0.3,
}}
/>
)}
<Text
size="1"
style={{
color: "var(--gray-11)",
textDecoration: "none",
}}
>
<a
href={
hasResponse && state.status.commit
? `https://github.com/Xevion/banner/commit/${state.status.commit}`
: "https://github.com/Xevion/banner"
}
target="_blank"
rel="noopener noreferrer"
style={{
color: "inherit",
textDecoration: "none",
}}
>
GitHub
</a>
</Text>
</Flex>
</Flex>
</div>
);
}
export const Route = createFileRoute("/")({
component: App,
});

View File

@@ -1,14 +1,13 @@
@import "@radix-ui/themes/styles.css";
body {
  margin: 0;
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu",
    "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

code {
  font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New", monospace;
}

3
web/src/vite-env.d.ts vendored Normal file
View File

@@ -0,0 +1,3 @@
/// <reference types="vite/client" />
declare const __APP_VERSION__: string;

View File

@@ -11,6 +11,7 @@
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"isolatedModules": true,
"noEmit": true,
/* Linting */
@@ -22,7 +23,7 @@
"noUncheckedSideEffectImports": true,
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"],
"@/*": ["./src/*"]
}
}
}

View File

@@ -2,6 +2,38 @@ import { defineConfig } from "vite";
import viteReact from "@vitejs/plugin-react";
import tanstackRouter from "@tanstack/router-plugin/vite";
import { resolve } from "node:path";
import { readFileSync, existsSync } from "node:fs";
// Extract version from Cargo.toml
function getVersion() {
const filename = "Cargo.toml";
const paths = [resolve(__dirname, filename), resolve(__dirname, "..", filename)];
for (const path of paths) {
try {
// Check if file exists before reading
if (!existsSync(path)) {
console.log("Skipping ", path, " because it does not exist");
continue;
}
const cargoTomlContent = readFileSync(path, "utf8");
const versionMatch = cargoTomlContent.match(/^version\s*=\s*"([^"]+)"/m);
if (versionMatch) {
console.log("Found version in ", path, ": ", versionMatch[1]);
return versionMatch[1];
}
} catch (error) {
console.warn("Failed to read Cargo.toml at path: ", path, error);
// Continue to next path
}
}
console.warn("Could not read version from Cargo.toml in any location");
return "unknown";
}
const version = getVersion();
// https://vitejs.dev/config/
export default defineConfig({
@@ -29,4 +61,7 @@ export default defineConfig({
outDir: "dist",
sourcemap: true,
},
define: {
__APP_VERSION__: JSON.stringify(version),
},
});