Merge pull request #6 from Xevion/0.2

### Added

- This `CHANGELOG.md` file.
- Structured logging with `structlog` (see the wiring sketch after this list)
  - Readable `ConsoleRenderer` for local development
  - `JSONRenderer` for production logging
- Request-Id Middleware with `asgi-correlation-id`
- Expanded `README.md` with more comprehensive instructions for installation & usage
  - Improved documentation details and comments repository-wide
- CodeSpell exceptions in VSCode workspace settings
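
A rough, self-contained sketch of how the logging additions fit together; the wiring below is illustrative and assumed, not the project's exact configuration, which lives in `backend/linkpulse/logging.py` and `backend/linkpulse/middleware.py` in the diffs further down:

```python
# Assumed, minimal wiring: pick a structlog renderer by environment and let
# asgi-correlation-id assign a request ID to every request.
import os

import structlog
from asgi_correlation_id import CorrelationIdMiddleware
from fastapi import FastAPI

json_logs = os.getenv("ENVIRONMENT") != "development"

structlog.configure(
    processors=[
        # merges anything bound via structlog.contextvars, e.g. the
        # request_id this PR's LoggingMiddleware binds per request
        structlog.contextvars.merge_contextvars,
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso", utc=True),
        # JSON in production, a readable console renderer locally
        (
            structlog.processors.JSONRenderer()
            if json_logs
            else structlog.dev.ConsoleRenderer()
        ),
    ]
)

app = FastAPI()
app.add_middleware(CorrelationIdMiddleware)  # sets/propagates X-Request-ID

structlog.get_logger().info("logging configured", json_logs=json_logs)
```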

### Changed

- Switched from `hypercorn` to `uvicorn` for the ASGI runtime
- Switched to a direct module `serve` command in `backend/run.sh` & `backend/railway.json`
- Relocated `.tool-versions` to the project root
- Massively overhauled `run.sh` scripts, mostly for the backend service
- Improved environment variable access in logging setup
- Root logger now adheres to the same format as the rest of the application
- Hide the IP list on the client when an error occurs
- `run.sh` passes through all arguments, e.g. the bpython REPL via `./run.sh repl`
- Use the UTC timezone for timestamps and localize human-readable strings, fixing a 4-hour offset issue (see the sketch after this list)
- `is_development` available globally from the `utilities` module
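
The timestamp fix in isolation, as a hedged sketch (the literal datetime is illustrative; the real call site is in the `app.py` diff further down):

```python
# Stored timestamps are naive but represent UTC, so attach the UTC timezone
# and compare against an aware UTC "now" before humanizing. The old code
# compared against a naive local datetime.now(), causing the 4-hour skew.
from datetime import datetime, timezone

import human_readable
import pytz

last_seen = datetime(2024, 11, 1, 20, 30)  # naive value from the database (UTC)
aware = pytz.utc.localize(last_seen)       # mark it as UTC
now = datetime.now(timezone.utc)           # timezone-aware current time

print(human_readable.date_time(value=aware, when=now))  # e.g. "an hour ago"
```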

### Removed

- Deprecated `startup` and `shutdown` events (superseded by the `lifespan` handler; see the sketch below)
- Development-only randomized IP address pool for testing
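
The removed hooks are superseded by FastAPI's `lifespan` parameter, as the `app.py` diff below shows; here is a minimal sketch of the pattern, with an in-memory SQLite stand-in for the project's Postgres handle:

```python
# Startup/shutdown work moves into a single lifespan context manager.
from contextlib import asynccontextmanager
from typing import AsyncIterator

from fastapi import FastAPI
from peewee import SqliteDatabase

db = SqliteDatabase(":memory:")  # stand-in for the project's PostgresqlDatabase


@asynccontextmanager
async def lifespan(_: FastAPI) -> AsyncIterator[None]:
    db.connect()  # replaces the deprecated @app.on_event("startup")
    yield  # the application serves requests here
    if not db.is_closed():
        db.close()  # replaces the deprecated @app.on_event("shutdown")


app = FastAPI(lifespan=lifespan)
```
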
Committed 2024-11-01 16:44:24 -05:00 (committed by GitHub). 23 changed files with 578 additions and 199 deletions.


@@ -1,2 +0,0 @@
ENVIRONMENT=
DATABASE_URL=

`.tool-versions` (new file)

@@ -0,0 +1 @@
nodejs 22.11.0

`.vscode/settings.json` (20 lines changed)

@@ -1,5 +1,17 @@
{
"python.analysis.extraPaths": [
"./backend/"
]
}
"cSpell.words": [
"apscheduler",
"bpython",
"Callsite",
"excepthook",
"inmemory",
"linkpulse",
"migratehistory",
"Nixpacks",
"pytz",
"starlette",
"structlog",
"timestamper"
],
"python.analysis.extraPaths": ["./backend/"]
}

`CHANGELOG.md` (new file, 37 lines)

@@ -0,0 +1,37 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.2.0] - 2024-11-01

### Added

- This `CHANGELOG.md` file.
- Structured logging with `structlog`
  - Readable `ConsoleRenderer` for local development
  - `JSONRenderer` for production logging
- Request-Id Middleware with `asgi-correlation-id`
- Expanded `README.md` with more comprehensive instructions for installation & usage
  - Improved documentation details and comments repository-wide
- CodeSpell exceptions in VSCode workspace settings

### Changed

- Switched from `hypercorn` to `uvicorn` for the ASGI runtime
- Switched to a direct module `serve` command in `backend/run.sh` & `backend/railway.json`
- Relocated `.tool-versions` to the project root
- Massively overhauled `run.sh` scripts, mostly for the backend service
- Improved environment variable access in logging setup
- Root logger now adheres to the same format as the rest of the application
- Hide the IP list on the client when an error occurs
- `run.sh` passes through all arguments, e.g. the bpython REPL via `./run.sh repl`
- Use the UTC timezone for timestamps and localize human-readable strings, fixing a 4-hour offset issue
- `is_development` available globally from the `utilities` module

### Removed

- Deprecated `startup` and `shutdown` events
- Development-only randomized IP address pool for testing


@@ -1,25 +1,49 @@
# linkpulse
This is an empty project right now. It merely holds a simplistic FastAPI server to showcase Railway.
A project for monitoring websites, built with FastAPI and React.
- Windows WSL is recommended for development. See [here][wsl] for setup instructions.
## Structure
## Project Structure
A description of the project's folder structure.
- `/backend` A backend server using [FastAPI][fastapi], managed with [Poetry][poetry].
- `/frontend` A frontend server using [React][react], managed with [pnpm][pnpm].
- `/backend/linkpulse` A python module containing the FastAPI application, database models, migration scripts, and more.
- `/backend/migrations` Migration scripts for [`peewee`][peewee]; most of this is generated automatically.
- `/frontend` A frontend server using [React][react], managed with [pnpm][pnpm], built with [Vite][vite].
- `/frontend/Caddyfile` A Caddy configuration file used for proxying API requests to the backend server via Private Networking (Railway).
- `/frontend/nixpacks.toml` Configures the frontend build process for Nixpacks, enabling the use of Caddy for deployment.
## Setup
Windows WSL is **strongly recommended** for development. See [here][wsl] for setup instructions.
The following instructions were written for Ubuntu 22.04 LTS, the primary (default) target for WSL.
### Frontend
1. Install Node.js 22.x
<!-- TODO: Add details on installation practices, asdf + nvm -->
3. Install `pnpm` with `npm install -g pnpm`
I recommend [`asdf`][asdf] or [`nvm`][nvm] for managing this (although `asdf` is superior in my opinion, and it's tool/language agnostic). [Alternatives are available though](https://nodejs.org/en/download/package-manager).
Assuming you're using Bash/Zsh & Git, you'll need to add this to your bashrc file: `. "$HOME/.asdf/asdf.sh"`. Shell completions are recommended, but optional. Refer to documentation [here][asdf-install] for further detail.
Once added, restart your terminal and `cd` into the project root.
```
asdf plugin add nodejs
asdf install
```
This installs the version of Node.js specified in [`.tool-versions`](.tool-versions).
> [!NOTE]
> If you use Node.js for other projects, you may want to install the version you need & set it as the global version via `asdf global nodejs <version>` or `asdf install nodejs latest:<version>`. If you don't care, `asdf install nodejs latest` also works.
2. Install `pnpm` with `npm install -g pnpm`
3. `cd frontend`
4. Install frontend dependencies with `pnpm install`
5. Start the frontend server with `./run.sh`
<!-- TODO: Get local Caddy server with Vite builds working. -->
<!-- TODO: Get local Caddy server working. -->
### Backend
@@ -29,21 +53,35 @@ This is an empty project right now. It merely holds a simplistic FastAPI server
2. Install `poetry`
- Requires `pipx`, see [here][pipx]
- Requires `pipx`, see [here][pipx]. You will NOT have this by default. This is NOT `pip`; do not install `pipx` or `poetry` with `pip`.
- Install with `pipx install poetry`
3. Install backend dependencies with `poetry install`.
4. Start the backend server with `./run.sh`
5. (*optional*) Install the [Railway CLI][railway]
- Fastest installation is via shell: `bash <(curl -fsSL cli.new)`
- Alternatives found [here][railway].
- This lets you skip creating a local `.env` file and keeps your database URL synchronized.
- You will have to run `railway login` upon install as well as `railway link` in the backend directory.
## Usage
- A fully editable (frontend and backend), automatically reloading project is possible, but it requires two terminals.
- Each terminal must start in the respective directory (`/backend` and `/frontend`).
- `./run.sh` will start the development server in the respective directory.
- The first argument is optional, but can be used in the frontend to compile & serve the backend.
A full-stack (*frontend* and *backend*), automatically reloading setup is possible, but it requires two terminals.
1. Open a terminal in each respective directory (`/backend` and `/frontend`).
2. Execute `./run.sh` to start the development server for each.
- For the backend, you'll need either the `railway` CLI installed or a `.env` file with the database URL.
- See [`.env.example`](backend/.env.example) for a list of all available environment variables.
- For the frontend, the defaults are already sufficient.
> [!WARNING]
> The `run.sh` scripts provide default environment variables internally; if you want to run the commands manually, you'll need to provide them via `.env` files or the command line.
[peewee]: https://docs.peewee-orm.com/en/latest/
[railway]: https://docs.railway.app/guides/cli
[vite]: https://vite.dev/
[asdf]: https://asdf-vm.com/
[asdf-install]: https://asdf-vm.com/guide/getting-started.html#_3-install-asdf
[nvm]: https://github.com/nvm-sh/nvm
[fastapi]: https://fastapi.tiangolo.com/
[poetry]: https://python-poetry.org/
[react]: https://react.dev/

`backend/.env.example` (new file)

@@ -0,0 +1 @@
DATABASE_URL=


@@ -1,34 +1,59 @@
import os
import sys
import structlog
logger = structlog.get_logger()
def main(*args):
if args[0] == "serve":
import asyncio
from hypercorn import Config
from hypercorn.asyncio import serve
from linkpulse.app import app
from linkpulse.logging import setup_logging
from uvicorn import run
config = Config()
config.use_reloader = True
setup_logging()
logger.debug("Invoking uvicorn.run")
run(
"linkpulse.app:app",
reload=True,
host="0.0.0.0",
port=int(os.getenv("PORT", "8000")),
log_config={
"version": 1,
"disable_existing_loggers": False,
"loggers": {
"uvicorn": {"propagate": True},
"uvicorn.access": {"propagate": True},
},
},
)
asyncio.run(serve(app, config))
elif args[0] == "migrate":
from linkpulse.migrate import main
main(*args[1:])
elif args[0] == "repl":
import linkpulse
lp = linkpulse
import linkpulse
# import most useful objects, models, and functions
lp = linkpulse # alias
from linkpulse.app import app, db
from linkpulse.models import BaseModel, IPAddress
from bpython import embed
# start REPL
from bpython import embed # type: ignore
embed(locals())
else:
print("Invalid command: {}".format(args[0]))
if __name__ == "__main__":
if len(sys.argv) == 1:
main("serve")
else:
# Check that args after aren't all whitespace
remaining_args = ' '.join(sys.argv[1:]).strip()
remaining_args = " ".join(sys.argv[1:]).strip()
if len(remaining_args) > 0:
main(*sys.argv[1:])
main(*sys.argv[1:])


@@ -1,31 +1,32 @@
import logging
import os
import random
from collections import defaultdict
from contextlib import asynccontextmanager
from dataclasses import dataclass, field
from datetime import datetime
from datetime import datetime, timezone
from typing import AsyncIterator
import human_readable
import pytz
import structlog
from apscheduler.schedulers.background import BackgroundScheduler # type: ignore
from apscheduler.triggers.interval import IntervalTrigger # type: ignore
from asgi_correlation_id import CorrelationIdMiddleware
from dotenv import load_dotenv
from fastapi import FastAPI, Request, Response, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi_cache import FastAPICache
from fastapi_cache.backends.inmemory import InMemoryBackend
from fastapi_cache.decorator import cache
import human_readable
from linkpulse.utilities import get_ip, hide_ip, pluralize
from linkpulse.logging import setup_logging
from linkpulse.middleware import LoggingMiddleware
from linkpulse.utilities import get_ip, hide_ip, is_development
from peewee import PostgresqlDatabase
from psycopg2.extras import execute_values
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
load_dotenv(dotenv_path=".env")
from linkpulse import models, responses # type: ignore
is_development = os.getenv("ENVIRONMENT") == "development"
db: PostgresqlDatabase = models.BaseModel._meta.database
db: PostgresqlDatabase = models.BaseModel._meta.database # type: ignore
def flush_ips():
@@ -50,11 +51,11 @@ def flush_ips():
cur = db.cursor()
execute_values(cur, sql, rows)
except:
print("Failed to flush IPs to the database.")
except Exception as e:
logger.error("Failed to flush IPs to Database", error=e)
i = len(app.state.buffered_updates)
print("Flushed {} IP{} to the database.".format(i, pluralize(i)))
logger.debug("Flushed IPs to Database", count=i)
# Finish up
app.state.buffered_updates.clear()
@@ -66,10 +67,6 @@ scheduler.add_job(flush_ips, IntervalTrigger(seconds=5))
@asynccontextmanager
async def lifespan(_: FastAPI) -> AsyncIterator[None]:
FastAPICache.init(
backend=InMemoryBackend(), prefix="fastapi-cache", cache_status_header="X-Cache"
)
if is_development:
# 42 is the answer to everything
random.seed(42)
@@ -78,6 +75,25 @@ async def lifespan(_: FastAPI) -> AsyncIterator[None]:
".".join(str(random.randint(0, 255)) for _ in range(4)) for _ in range(50)
]
# Connect to database, ensure specific tables exist
db.connect()
db.create_tables([models.IPAddress])
# Delete all randomly generated IP addresses
with db.atomic():
logger.info(
"Deleting Randomized IP Addresses", ip_pool_count=len(app.state.ip_pool)
)
query = models.IPAddress.delete().where(
models.IPAddress.ip << app.state.ip_pool
)
row_count = query.execute()
logger.info("Randomized IP Addresses deleted", row_count=row_count)
FastAPICache.init(
backend=InMemoryBackend(), prefix="fastapi-cache", cache_status_header="X-Cache"
)
app.state.buffered_updates = defaultdict(IPCounter)
scheduler.start()
@@ -87,42 +103,39 @@ async def lifespan(_: FastAPI) -> AsyncIterator[None]:
scheduler.shutdown()
flush_ips()
if not db.is_closed():
db.close()
@dataclass
class IPCounter:
# Note: This is not the true 'seen' count, but the count of how many times the IP has been seen since the last flush.
count: int = 0
last_seen: datetime = field(default_factory=datetime.now)
last_seen: datetime = field(default_factory=datetime.utcnow)
app = FastAPI(lifespan=lifespan)
setup_logging()
logger = structlog.get_logger()
if is_development:
origins = [
"http://localhost",
"http://localhost:5173",
]
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_origins=[
"http://localhost",
"http://localhost:5173",
],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.on_event("startup")
def startup():
db.connect()
db.create_tables([models.IPAddress])
@app.on_event("shutdown")
def shutdown():
if not db.is_closed():
db.close()
app.add_middleware(LoggingMiddleware)
app.add_middleware(CorrelationIdMiddleware)
@app.get("/health")
@@ -144,25 +157,19 @@ async def get_migration():
return {"name": name, "migrated_at": migrated_at}
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@app.get("/api/ips")
async def get_ips(request: Request, response: Response):
"""
Returns a list of partially redacted IP addresses, as well as submitting the user's IP address to the database (buffered).
"""
now = datetime.now()
now = datetime.utcnow()
# Get the user's IP address
user_ip = (
get_ip(request) if not is_development else random.choice(app.state.ip_pool)
)
user_ip = get_ip(request)
# If the IP address is not found, return an error
if user_ip is None:
print("No IP found!")
logger.warning("unable to acquire user IP address")
response.status_code = status.HTTP_403_FORBIDDEN
return {"error": "Unable to handle request."}
@@ -183,7 +190,10 @@ async def get_ips(request: Request, response: Response):
"ips": [
responses.SeenIP(
ip=hide_ip(ip.ip) if ip.ip != user_ip else ip.ip,
last_seen=human_readable.date_time(ip.last_seen),
last_seen=human_readable.date_time(
value=pytz.utc.localize(ip.last_seen),
when=datetime.now(timezone.utc),
),
count=ip.count,
)
for ip in latest_ips


@@ -0,0 +1,143 @@
import logging
import os
import sys
from typing import List, Optional
import structlog
from structlog.types import EventDict, Processor
def rename_event_key(_, __, event_dict: EventDict) -> EventDict:
"""
Renames the `event` key to `msg`, as Railway expects it in that form.
"""
event_dict["msg"] = event_dict.pop("event")
return event_dict
def drop_color_message_key(_, __, event_dict: EventDict) -> EventDict:
"""
Uvicorn logs the message a second time in the extra `color_message`, but we don't
need it. This processor drops the key from the event dict if it exists.
"""
event_dict.pop("color_message", None)
return event_dict
def setup_logging(
json_logs: Optional[bool] = None, log_level: Optional[str] = None
) -> None:
json_logs = json_logs or os.getenv("LOG_JSON_FORMAT", "true").lower() == "true"
log_level = log_level or os.getenv("LOG_LEVEL", "INFO")
def flatten(n):
match n:
case []:
return []
case [[*hd], *tl]:
return [*flatten(hd), *flatten(tl)]
case [hd, *tl]:
return [hd, *flatten(tl)]
shared_processors: List[Processor] = flatten(
[
structlog.contextvars.merge_contextvars,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.stdlib.ExtraAdder(),
drop_color_message_key,
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
(
[
rename_event_key,
# Format the exception only for JSON logs, as we want to pretty-print them when using the ConsoleRenderer
structlog.processors.format_exc_info,
]
if json_logs
else []
),
]
)
structlog.configure(
processors=[
*shared_processors,
# Prepare event dict for `ProcessorFormatter`.
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
logger_factory=structlog.stdlib.LoggerFactory(),
cache_logger_on_first_use=True,
)
log_renderer: structlog.types.Processor
if json_logs:
log_renderer = structlog.processors.JSONRenderer()
else:
log_renderer = structlog.dev.ConsoleRenderer()
formatter = structlog.stdlib.ProcessorFormatter(
# These run ONLY on `logging` entries that do NOT originate within structlog.
foreign_pre_chain=shared_processors,
# These run on ALL entries after the pre_chain is done.
processors=[
# Remove _record & _from_structlog.
structlog.stdlib.ProcessorFormatter.remove_processors_meta,
log_renderer,
],
)
handler = logging.StreamHandler()
# Use OUR `ProcessorFormatter` to format all `logging` entries.
handler.setFormatter(formatter)
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(log_level.upper())
def configure_logger(
name: str,
level: Optional[str] = None,
clear: Optional[bool] = None,
propagate: Optional[bool] = None,
) -> None:
logger = logging.getLogger(name)
if level is not None:
logger.setLevel(level.upper())
if clear is True:
logger.handlers.clear()
if propagate is not None:
logger.propagate = propagate
# Clear the log handlers for uvicorn loggers, and enable propagation
# so the messages are caught by our root logger and formatted correctly
# by structlog
configure_logger("uvicorn", clear=True, propagate=True)
configure_logger("uvicorn.error", clear=True, propagate=True)
configure_logger("apscheduler.executors.default", level="WARNING")
# Since we re-create the access logs ourselves, to add all information
# in the structured log (see the `logging_middleware` in main.py), we clear
# the handlers and prevent the logs from propagating to a logger higher up
# in the hierarchy (effectively rendering them silent).
configure_logger("uvicorn.access", clear=True, propagate=False)
def handle_exception(exc_type, exc_value, exc_traceback):
"""
Log any uncaught exception instead of letting it be printed by Python
(but leave KeyboardInterrupt untouched to allow users to Ctrl+C to stop)
See https://stackoverflow.com/a/16993115/3641865
"""
if issubclass(exc_type, KeyboardInterrupt):
sys.__excepthook__(exc_type, exc_value, exc_traceback)
return
root_logger.error(
"Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback)
)
sys.excepthook = handle_exception


@@ -0,0 +1,58 @@
import time
from asgi_correlation_id import correlation_id
import structlog
from linkpulse.utilities import is_development
from fastapi import FastAPI, Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
class LoggingMiddleware(BaseHTTPMiddleware):
def __init__(self, app: FastAPI):
super().__init__(app)
self.access_logger = structlog.get_logger("api.access")
async def dispatch(self, request: Request, call_next) -> Response:
structlog.contextvars.clear_contextvars()
# These context vars will be added to all log entries emitted during the request
request_id = correlation_id.get()
structlog.contextvars.bind_contextvars(request_id=request_id)
start_time = time.perf_counter_ns()
# If the call_next raises an error, we still want to return our own 500 response,
# so we can add headers to it (process time, request ID...)
response = Response(status_code=500)
try:
response = await call_next(request)
except Exception:
# TODO: Validate that we don't swallow exceptions (unit test?)
structlog.stdlib.get_logger("api.error").exception("Uncaught exception")
raise
finally:
process_time_ms = "{:.2f}".format(
(time.perf_counter_ns() - start_time) / 10**6
)
self.access_logger.debug(
"Request",
http={
"url": str(request.url),
"query": dict(request.query_params),
"status_code": response.status_code,
"method": request.method,
"request_id": request_id,
"version": request.scope["http_version"],
},
client=(
{"ip": request.client.host, "port": request.client.port}
if request.client
else None
),
duration_ms=process_time_ms,
)
if is_development:
response.headers["X-Process-Time"] = process_time_ms
return response


@@ -1,4 +1,3 @@
import os
import pkgutil
import re
import sys


@@ -2,12 +2,13 @@ from peewee import Model, CharField, DateTimeField, IntegerField
from playhouse.db_url import connect
from os import environ
class BaseModel(Model):
class Meta:
database = connect(url=environ.get('DATABASE_URL'))
database = connect(url=environ.get("DATABASE_URL"))
class IPAddress(BaseModel):
ip = CharField(primary_key=True)
last_seen = DateTimeField()
count = IntegerField(default=0)
count = IntegerField(default=0)


@@ -1,8 +1,7 @@
from pydantic import BaseModel
from datetime import datetime
class SeenIP(BaseModel):
ip: str
last_seen: str
count: int
count: int


@@ -1,37 +1,44 @@
import os
from typing import Optional
from fastapi import Request
is_development = os.getenv("ENVIRONMENT") == "development"
def pluralize(count: int) -> str:
def pluralize(count: int, word: Optional[str] = None) -> str:
"""
Pluralize based on count. If `word` is given, returns the word with 's' appended when count != 1 (unchanged otherwise); without `word`, returns just 's' or '' (empty string).
"""
return 's' if count != 1 else ''
if word:
return word + "s" if count != 1 else word
return "s" if count != 1 else ""
def get_ip(request: Request) -> Optional[str]:
"""
This function attempts to retrieve the client's IP address from the request headers.
It first checks the 'X-Forwarded-For' header, which is commonly used in proxy setups.
If the header is present, it returns the first IP address in the list.
If the header is not present, it falls back to the client's direct connection IP address.
If neither is available, it returns None.
Args:
request (Request): The request object containing headers and client information.
Returns:
Optional[str]: The client's IP address if available, otherwise None.
"""
x_forwarded_for = request.headers.get('X-Forwarded-For')
x_forwarded_for = request.headers.get("X-Forwarded-For")
if x_forwarded_for:
return x_forwarded_for.split(',')[0]
return x_forwarded_for.split(",")[0]
if request.client:
return request.client.host
return None
def hide_ip(ip: str, hidden_octets: Optional[int] = None) -> str:
"""
Hide the last octet(s) of an IP address.
@@ -46,26 +53,29 @@ def hide_ip(ip: str, hidden_octets: Optional[int] = None) -> str:
Examples:
>>> hide_ip("192.168.1.1")
'192.168.1.X'
>>> hide_ip("192.168.1.1", 2)
'192.168.X.X'
>>> hide_ip("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
'2001:0db8:85a3:0000:0000:XXXX:XXXX:XXXX'
>>> hide_ip("2001:0db8:85a3:0000:0000:8a2e:0370:7334", 4)
'2001:0db8:85a3:0000:XXXX:XXXX:XXXX:XXXX'
"""
ipv6 = ':' in ip
ipv6 = ":" in ip
# Make sure that IPv4 (dot) and IPv6 (colon) addresses are not mixed together somehow. Not a comprehensive check.
if ipv6 == ('.' in ip):
if ipv6 == ("." in ip):
raise ValueError("Invalid IP address format. Must be either IPv4 or IPv6.")
total_octets = 8 if ipv6 else 4
separator = ':' if ipv6 else '.'
replacement = 'XXXX' if ipv6 else 'X'
separator = ":" if ipv6 else "."
replacement = "XXXX" if ipv6 else "X"
if hidden_octets is None:
hidden_octets = 3 if ipv6 else 1
return separator.join(ip.split(separator, total_octets - hidden_octets)[:-1]) + (separator + replacement) * hidden_octets
return (
separator.join(ip.split(separator, total_octets - hidden_octets)[:-1])
+ (separator + replacement) * hidden_octets
)


@@ -1,2 +1,3 @@
[variables]
# Otherwise, Poetry will use a very old & incompatible version, 1.3.1
NIXPACKS_POETRY_VERSION='1.8.4'

`backend/poetry.lock` (generated, 144 lines changed)

@@ -70,6 +70,24 @@ tornado = ["tornado (>=4.3)"]
twisted = ["twisted"]
zookeeper = ["kazoo"]
[[package]]
name = "asgi-correlation-id"
version = "4.3.4"
description = "Middleware correlating project logs to individual requests"
optional = false
python-versions = "<4.0,>=3.8"
files = [
{file = "asgi_correlation_id-4.3.4-py3-none-any.whl", hash = "sha256:36ce69b06c7d96b4acb89c7556a4c4f01a972463d3d49c675026cbbd08e9a0a2"},
{file = "asgi_correlation_id-4.3.4.tar.gz", hash = "sha256:ea6bc310380373cb9f731dc2e8b2b6fb978a76afe33f7a2384f697b8d6cd811d"},
]
[package.dependencies]
packaging = "*"
starlette = ">=0.18"
[package.extras]
celery = ["celery"]
[[package]]
name = "blessed"
version = "1.20.0"
@@ -460,32 +478,6 @@ files = [
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
]
[[package]]
name = "h2"
version = "4.1.0"
description = "HTTP/2 State-Machine based protocol implementation"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "h2-4.1.0-py3-none-any.whl", hash = "sha256:03a46bcf682256c95b5fd9e9a99c1323584c3eec6440d379b9903d709476bc6d"},
{file = "h2-4.1.0.tar.gz", hash = "sha256:a83aca08fbe7aacb79fec788c9c0bac936343560ed9ec18b82a13a12c28d2abb"},
]
[package.dependencies]
hpack = ">=4.0,<5"
hyperframe = ">=6.0,<7"
[[package]]
name = "hpack"
version = "4.0.0"
description = "Pure-Python HPACK header compression"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "hpack-4.0.0-py3-none-any.whl", hash = "sha256:84a076fad3dc9a9f8063ccb8041ef100867b1878b25ef0ee63847a5d53818a6c"},
{file = "hpack-4.0.0.tar.gz", hash = "sha256:fc41de0c63e687ebffde81187a948221294896f6bdc0ae2312708df339430095"},
]
[[package]]
name = "human-readable"
version = "1.3.4"
@@ -497,40 +489,6 @@ files = [
{file = "human_readable-1.3.4.tar.gz", hash = "sha256:5726eac89066ec25d14447a173e645a855184645d024eb306705e2bfbb60f0c0"},
]
[[package]]
name = "hypercorn"
version = "0.14.4"
description = "A ASGI Server based on Hyper libraries and inspired by Gunicorn"
optional = false
python-versions = ">=3.7"
files = [
{file = "hypercorn-0.14.4-py3-none-any.whl", hash = "sha256:f956200dbf8677684e6e976219ffa6691d6cf795281184b41dbb0b135ab37b8d"},
{file = "hypercorn-0.14.4.tar.gz", hash = "sha256:3fa504efc46a271640023c9b88c3184fd64993f47a282e8ae1a13ccb285c2f67"},
]
[package.dependencies]
h11 = "*"
h2 = ">=3.1.0"
priority = "*"
wsproto = ">=0.14.0"
[package.extras]
docs = ["pydata_sphinx_theme"]
h3 = ["aioquic (>=0.9.0,<1.0)"]
trio = ["exceptiongroup (>=1.1.0)", "trio (>=0.22.0)"]
uvloop = ["uvloop"]
[[package]]
name = "hyperframe"
version = "6.0.1"
description = "HTTP/2 framing layer for Python"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "hyperframe-6.0.1-py3-none-any.whl", hash = "sha256:0ec6bafd80d8ad2195c4f03aacba3a8265e57bc4cff261e802bf39970ed02a15"},
{file = "hyperframe-6.0.1.tar.gz", hash = "sha256:ae510046231dc8e9ecb1a6586f63d2347bf4c8905914aa84ba585ae85f28a914"},
]
[[package]]
name = "idna"
version = "3.10"
@@ -573,6 +531,17 @@ files = [
[package.dependencies]
psutil = "*"
[[package]]
name = "packaging"
version = "24.1"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
files = [
{file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
{file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
]
[[package]]
name = "peewee"
version = "3.17.7"
@@ -697,17 +666,6 @@ tzdata = ">=2020.1"
[package.extras]
test = ["time-machine (>=2.6.0)"]
[[package]]
name = "priority"
version = "2.0.0"
description = "A pure-Python implementation of the HTTP/2 priority tree"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "priority-2.0.0-py3-none-any.whl", hash = "sha256:6f8eefce5f3ad59baf2c080a664037bb4725cd0a790d53d59ab4059288faf6aa"},
{file = "priority-2.0.0.tar.gz", hash = "sha256:c965d54f1b8d0d0b19479db3924c7c36cf672dbf2aec92d43fbdaf4492ba18c0"},
]
[[package]]
name = "prompt-toolkit"
version = "3.0.36"
@@ -1030,6 +988,23 @@ anyio = ">=3.4.0,<5"
[package.extras]
full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart", "pyyaml"]
[[package]]
name = "structlog"
version = "24.4.0"
description = "Structured Logging for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "structlog-24.4.0-py3-none-any.whl", hash = "sha256:597f61e80a91cc0749a9fd2a098ed76715a1c8a01f73e336b746504d1aad7610"},
{file = "structlog-24.4.0.tar.gz", hash = "sha256:b27bfecede327a6d2da5fbc96bd859f114ecc398a6389d664f62085ee7ae6fc4"},
]
[package.extras]
dev = ["freezegun (>=0.2.8)", "mypy (>=1.4)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "rich", "simplejson", "twisted"]
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-mermaid", "sphinxext-opengraph", "twisted"]
tests = ["freezegun (>=0.2.8)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "simplejson"]
typing = ["mypy (>=1.4)", "rich", "twisted"]
[[package]]
name = "types-peewee"
version = "3.17.7.20241017"
@@ -1052,6 +1027,17 @@ files = [
{file = "types_psycopg2-2.9.21.20241019-py3-none-any.whl", hash = "sha256:44d091e67732d16a941baae48cd7b53bf91911bc36888652447cf1ef0c1fb3f6"},
]
[[package]]
name = "types-pytz"
version = "2024.2.0.20241003"
description = "Typing stubs for pytz"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-pytz-2024.2.0.20241003.tar.gz", hash = "sha256:575dc38f385a922a212bac00a7d6d2e16e141132a3c955078f4a4fd13ed6cb44"},
{file = "types_pytz-2024.2.0.20241003-py3-none-any.whl", hash = "sha256:3e22df1336c0c6ad1d29163c8fda82736909eb977281cb823c57f8bae07118b7"},
]
[[package]]
name = "typing-extensions"
version = "4.12.2"
@@ -1137,21 +1123,7 @@ files = [
{file = "wcwidth-0.2.13.tar.gz", hash = "sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5"},
]
[[package]]
name = "wsproto"
version = "1.2.0"
description = "WebSockets state-machine based protocol implementation"
optional = false
python-versions = ">=3.7.0"
files = [
{file = "wsproto-1.2.0-py3-none-any.whl", hash = "sha256:b9acddd652b585d75b20477888c56642fdade28bdfd3579aa24a4d2c037dd736"},
{file = "wsproto-1.2.0.tar.gz", hash = "sha256:ad565f26ecb92588a3e43bc3d96164de84cd9902482b130d0ddbaa9664a85065"},
]
[package.dependencies]
h11 = ">=0.9.0,<1"
[metadata]
lock-version = "2.0"
python-versions = "^3.12"
content-hash = "e69fd1560f0fe7e4c5a4c64918fb7c9dab13a3f76a37b92756d12c06c40a466e"
content-hash = "a0cc32861b71da789edc5df54e79239d6cca81cb3d14984a1306a3f92735589f"


@@ -1,6 +1,6 @@
[tool.poetry]
name = "linkpulse"
version = "0.1.0"
version = "0.2.0"
description = ""
authors = ["Xevion <xevion@xevion.dev>"]
license = "GNU GPL v3"
@@ -13,7 +13,6 @@ app = "linkpulse"
[tool.poetry.dependencies]
python = "^3.12"
fastapi = "0.100"
Hypercorn = "0.14.4"
python-dotenv = "^1.0.1"
peewee = "^3.17.7"
peewee-migrate = "^1.13.0"
@@ -24,11 +23,15 @@ questionary = "^2.0.1"
apscheduler = "^3.10.4"
human-readable = "^1.3.4"
psycopg2 = "^2.9.10"
structlog = "^24.4.0"
uvicorn = "^0.32.0"
asgi-correlation-id = "^4.3.4"
[tool.poetry.group.dev.dependencies]
memory-profiler = "^0.61.0"
bpython = "^0.24"
types-pytz = "^2024.2.0.20241003"
[build-system]
requires = ["poetry-core"]


@@ -4,6 +4,6 @@
"builder": "NIXPACKS"
},
"deploy": {
"startCommand": "hypercorn linkpulse.app:app --bind \"[::]:$PORT\""
"startCommand": "python3 -m linkpulse serve"
}
}


@@ -1,3 +1,62 @@
#!/usr/bin/env bash
poetry run hypercorn linkpulse.app:app --reload
# Check whether CWD is 'backend'
if [ "$(basename "$(pwd)")" != "backend" ]; then
echo "error: This script must be run from the 'backend' directory."
exit 1
fi
# Default to development mode if not defined
export ENVIRONMENT=${ENVIRONMENT:-development}
COMMAND='poetry run python3 -m linkpulse'
# Check if Railway CLI is available
RAILWAY_AVAILABLE=false
if command -v railway &>/dev/null; then
RAILWAY_AVAILABLE=true
fi
# Check if .env file exists
ENV_FILE_EXISTS=false
if [ -f .env ]; then
ENV_FILE_EXISTS=true
fi
# Check if DATABASE_URL is defined
DATABASE_DEFINED=false
if [ -n "$DATABASE_URL" ]; then
DATABASE_DEFINED=true
else
if $ENV_FILE_EXISTS; then
if grep -E '^DATABASE_URL=.+' .env &>/dev/null; then
DATABASE_DEFINED=true
fi
fi
fi
# Check if Railway project is linked
PROJECT_LINKED=false
if $RAILWAY_AVAILABLE; then
if railway status &>/dev/null; then
PROJECT_LINKED=true
fi
fi
if $DATABASE_DEFINED; then
$COMMAND $@
else
if $RAILWAY_AVAILABLE; then
if $PROJECT_LINKED; then
DATABASE_URL="$(railway variables -s Postgres --json | jq .DATABASE_PUBLIC_URL -cMr)" $COMMAND $@
else
echo "error: Railway project not linked."
echo "Run 'railway link' to link the project."
exit 1
fi
else
echo "error: Could not find DATABASE_URL environment variable."
echo "Install the Railway CLI and link the project, or create a .env file with a DATABASE_URL variable."
exit 1
fi
fi

`frontend/.env.example` (new file)

@@ -0,0 +1 @@
VITE_BACKEND_TARGET=


@@ -1 +0,0 @@
nodejs 22.9.0


@@ -1,3 +1,10 @@
#!/usr/bin/env bash
# Check whether CWD is 'frontend'
if [ "$(basename "$(pwd)")" != "frontend" ]; then
echo "error: This script must be run from the 'frontend' directory."
exit 1
fi
export VITE_BACKEND_TARGET=${VITE_BACKEND_TARGET:-localhost:8000}
pnpm run dev


@@ -66,17 +66,22 @@ export default function App() {
<div className="relative overflow-x-auto">
<table className="w-full text-left text-sm text-gray-500 rtl:text-right dark:text-gray-300">
<tbody>
{seenIps.map((ip) => (
<tr key={ip.ip} className="border-b last:border-0 bg-white dark:border-neutral-700 dark:bg-neutral-800">
<td className="py-4">
<Code>{ip.ip}</Code>
</td>
<td className="py-4">
{ip.count} time{ip.count > 1 ? 's' : ''}
</td>
<td className="py-4">{ip.last_seen}</td>
</tr>
))}
{error == null
? seenIps.map((ip) => (
<tr
key={ip.ip}
className="border-b bg-white last:border-0 dark:border-neutral-700 dark:bg-neutral-800"
>
<td className="py-4">
<Code>{ip.ip}</Code>
</td>
<td className="py-4">
{ip.count} time{ip.count > 1 ? 's' : ''}
</td>
<td className="py-4">{ip.last_seen}</td>
</tr>
))
: null}
</tbody>
</table>
</div>