53 Commits

Author SHA1 Message Date
24a3ae5f24 Merge pull request #9 from Xevion/0.2.2
### Added

- Added the `orjson` serializer for faster JSON serialization (see the sketch after this list)
  - Used in `structlog`'s `JSONRenderer` for production logging
  - Used in `fastapi`'s `Response` for faster response serialization
- Improved documentation in multiple files
  - `__main__.py`
  - `logging.py`
  - `models.py`
  - `utilities.py`
  - `migrate.py`
  - `responses.py`
- A `get_db` utility function to retrieve a reference to the database (with type hinting)
- Minor `DATABASE_URL` check in `models.py` to prevent cryptic connection issues
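
For context on the `orjson` entries above, a minimal sketch (assumed wiring, not the verbatim project code) of how the serializer plugs into both libraries; it matches the `logging.py` and `app.py` diffs further down:

```python
import orjson
import structlog
from fastapi import FastAPI
from fastapi.responses import ORJSONResponse

# structlog: orjson.dumps serializes the event dict; note it returns bytes,
# so a decode step is still needed afterwards (see decode_bytes in logging.py).
renderer = structlog.processors.JSONRenderer(serializer=orjson.dumps)

# FastAPI: ORJSONResponse makes orjson the default response serializer.
app = FastAPI(default_response_class=ORJSONResponse)
```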

### Changed

- Migration script now uses `structlog` instead of `print`
  - Migration script output is tuned for `structlog` as well
- Migration names must be at least 9 characters long
- Unspecified IPv6 addresses (`::`) are now returned unmodified by `utilities.hide_ip`
- Applied `get_db` utility function in all applicable areas.

### Fixed

- Raised level for `apscheduler.scheduler` logger to `WARNING` to prevent excessive logging
- IPv4-only interface bind in production, which prevented Railway's Private Networking from functioning
- Reloader mode enabled in production
2024-11-01 19:20:23 -05:00
f8d1edcf3b Bump project version to 0.2.2 2024-11-01 19:18:36 -05:00
43bf96e5c1 Add orjson JSON serializer for FastAPI & structlog performance 2024-11-01 19:17:57 -05:00
b561ec6998 Improve migrate, responses docs, require min length 9 migration name (validator) 2024-11-01 18:42:57 -05:00
75e875e61d Switch migrate.py to structlog, remove unused old testing code 2024-11-01 18:37:10 -05:00
3b3f3ba784 Fix unspecified IPv6 addresses being malformed by hide_ip, fix double private get_database_url breaking 2024-11-01 18:24:06 -05:00
204be26201 Improve migrate.py docs, variable names 2024-11-01 18:17:38 -05:00
b7d9b256d9 Minor documentation improvement in utilities.py 2024-11-01 18:12:34 -05:00
01f6d348cd Improve models.py documentation, small DATABASE_URL check 2024-11-01 18:10:49 -05:00
cf7536a39b Add get_db utility function
- Minor changes in flush_ips log messages
2024-11-01 18:03:32 -05:00
85a2d82832 Remove deprecated utcnow() usage, pass UTC TzInfo instead 2024-11-01 17:54:36 -05:00
1ecab265ac Raise level for apscheduler.scheduler logger, add TODO for easier log configuration 2024-11-01 17:53:28 -05:00
b67272392a Improve logging.py documentation 2024-11-01 17:51:22 -05:00
0407dba4d1 Updated CHANGELOG.md 2024-11-01 17:46:56 -05:00
52df0c571f Fix IPv4 interface bind in production, fix reloader enabled in production 2024-11-01 17:45:37 -05:00
65701b7178 Improve entrypoint documentation & debug logs 2024-11-01 17:45:07 -05:00
53bf74dcd7 Merge pull request #7 from Xevion/0.2-fix
### Changed

- Mildly reformatted `README.md`
- Removed the development-mode check on `app.state.ip_pool` initialization (the check caused application failure in production only)

### Fixed

- Improper formatting of blockquote Alerts in `README.md`
2024-11-01 17:11:49 -05:00
e61b2a7f60 Bump project version to 0.2.1 2024-11-01 17:11:01 -05:00
185b2f5589 Fixed blockquote alerts in README, mild reformatting 2024-11-01 16:58:55 -05:00
7a27175423 Remove development mode check for ip_pool generation
I just wanna see it run on the production server once; I'll remove all of this later.
2024-11-01 16:57:37 -05:00
2b1886acd9 Merge pull request #6 from Xevion/0.2
### Added

- This `CHANGELOG.md` file.
- Structured logging with `structlog` (renderer selection is sketched after this list)
  - Readable `ConsoleRenderer` for local development
  - `JSONRenderer` for production logging
- Request-Id Middleware with `asgi-correlation-id`
- Expanded README.md with more comprehensive instructions for installation & usage
  - Repository-wide improved documentation details, comments
- CodeSpell exceptions in VSCode workspace settings
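
A minimal sketch of the dev/prod renderer split described above, assuming the `LOG_JSON_FORMAT` toggle used in the `logging.py` diff below:

```python
import os

import structlog

# JSON logs in production, pretty console logs for local development.
json_logs = os.getenv("LOG_JSON_FORMAT", "true").lower() == "true"
renderer: structlog.types.Processor = (
    structlog.processors.JSONRenderer()  # machine-readable, for production
    if json_logs
    else structlog.dev.ConsoleRenderer()  # colorized, for local development
)
```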

### Changed

- Switched from `hypercorn` to `uvicorn` for ASGI runtime
- Switched to direct module 'serve' command in `backend/run.sh` & `backend/railway.json`
- Relocated `.tool-versions` to project root
- Massively overhauled run.sh scripts, mostly for backend service
- Improved environment variable access in logging setup
- Root logger now adheres to the same format as the rest of the application
- Hide the IP list when an error occurs on the client
- `run.sh` passes through all arguments, e.g. bpython REPL via `./run.sh repl`
- Use UTC for timestamps and localize human-readable strings, fixing a 4-hour offset issue
- `is_development` available globally from `utilities` module

### Removed

- Deprecated `startup` and `shutdown` events
- Development-only randomized IP address pool for testing
2024-11-01 16:44:24 -05:00
a894dd83c1 Add CHANGELOG.md 2024-11-01 16:40:17 -05:00
e1bbeedaf2 Remove 'ms' suffix from 'duration' log key, re-enable X-Process-Time header in development mode 2024-11-01 16:38:20 -05:00
441ab00da3 Move is_development into utilities.py 2024-11-01 16:37:39 -05:00
40669b8f08 Fix human_readable not handling UTC dates properly, add types-pytz 2024-11-01 16:36:47 -05:00
daf9254596 Passthrough arguments for backend/run.sh 2024-11-01 16:23:23 -05:00
40385c9739 Hide, don't clear seenIps upon error 2024-11-01 16:15:35 -05:00
10b93d41d6 Reformat all python files, remove unused imports 2024-11-01 16:13:01 -05:00
4b85153065 Clear client's seen IPs list upon error 2024-11-01 16:12:37 -05:00
57aa841871 Use datetime.utcnow instead, eliminate timezone consideration 2024-11-01 16:12:16 -05:00
8b85fe7040 Add VSCode Spellcheck manual exceptions 2024-11-01 16:10:19 -05:00
796c28d72d Switch railway.json startCommand to module serve, use PORT variable 2024-11-01 16:02:40 -05:00
267abfe792 Remove old hypercorn command with unused logging.toml 2024-11-01 16:01:58 -05:00
6fe1a8b60f Use logger not logging, use keyword arguments for structured logging 2024-11-01 15:47:28 -05:00
9336fb5506 Set access logs to debug, millisecond process time, pluralize word option
I'm unsure if it's good to use string notation in the duration, maybe duration_ms to imply the unit would be better?
2024-11-01 15:47:28 -05:00
bcb1441251 Switch main app loggers to structlog, fix improper structlogs 2024-11-01 15:47:28 -05:00
f93df47b67 Disable X-Process-Time response header 2024-11-01 15:47:28 -05:00
3232e89d0a Bump project version to 0.2.0 2024-11-01 15:47:28 -05:00
a873c4785d Access environment variables directly in setup_logging 2024-11-01 15:47:28 -05:00
1741739310 Disable handlers, setup propagation with uvicorn log_config
Apparently this was what I have been chasing for the last few hours.
2024-11-01 15:47:28 -05:00
3a2ef75086 Add ASGI Request-Id correlation, add structlog LoggingMiddleware, overhaul all logging
- minor formatting details, type fixes.
2024-11-01 15:47:28 -05:00
a96631e81e Setup structlog, delete randomized IPs on startup
- minor formatting, type fixes
2024-11-01 15:47:28 -05:00
0816ddcdca Switch from hypercorn to uvicorn, structlog testing 2024-11-01 15:47:28 -05:00
91cc8e24b6 Replace deprecated startup/shutdown events with a proper application Lifespan definition 2024-11-01 15:47:28 -05:00
f8b76c757c Initial logging improvements, switch run.sh to direct module 'serve' cmd 2024-11-01 15:47:28 -05:00
902eb74deb Add 'structlog' module 2024-11-01 15:47:28 -05:00
5a288cf87c Only import CORSMiddleware in development mode 2024-11-01 15:47:28 -05:00
cb76965a43 Add run.sh warning note to README 2024-11-01 15:47:28 -05:00
af91adeca3 Overhaul README.md with more instructions (env vars, railway CLI, usage, asdf install) 2024-11-01 15:47:28 -05:00
5390fb57a7 Overhaul run.sh scripts
- Default environment variables through this, instead of .env
- Added some basic checks to ensure developers don't stub their toe
- Use `railway link`, injecting environment variables without insecure dotenv files
2024-11-01 15:47:28 -05:00
f034b41da1 Add .env.example in proper places 2024-11-01 15:47:28 -05:00
109e09df50 Add note on purpose of poetry version in Nixpacks config 2024-11-01 15:47:28 -05:00
b962966080 Move .tool-versions to project root 2024-11-01 15:47:22 -05:00
23 changed files with 865 additions and 282 deletions

.env.example

@@ -1,2 +0,0 @@
ENVIRONMENT=
DATABASE_URL=

.tool-versions Normal file

@@ -0,0 +1 @@
nodejs 22.11.0

.vscode/settings.json vendored

@@ -1,5 +1,18 @@
{
"python.analysis.extraPaths": [
"./backend/"
]
}
"cSpell.words": [
"apscheduler",
"bpython",
"Callsite",
"excepthook",
"inmemory",
"linkpulse",
"migratehistory",
"Nixpacks",
"ORJSON",
"pytz",
"starlette",
"structlog",
"timestamper"
],
"python.analysis.extraPaths": ["./backend/"]
}

CHANGELOG.md Normal file

@@ -0,0 +1,79 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.2] - 2024-11-01
### Added
- Added the `orjson` serializer for faster JSON serialization
- Used in `structlog`'s `JSONRenderer` for production logging
- Used in `fastapi`'s `Response` for faster response serialization
- Improved documentation in multiple files
- `__main__.py`
- `logging.py`
- `models.py`
- `utilities.py`
- `migrate.py`
- `responses.py`
- A `get_db` utility function to retrieve a reference to the database (with type hinting)
- Minor `DATABASE_URL` check in `models.py` to prevent cryptic connection issues
### Changed
- Migration script now uses `structlog` instead of `print`
- Migration script output is tuned to structlog as well.
- Migration names must be at least 9 characters long
- Unspecified IPv6 addresses are returned without hiding in `utilities.hide_ip`
- Applied `get_db` utility function in all applicable areas.
### Fixed
- Raised level for `apscheduler.scheduler` logger to `WARNING` to prevent excessive logging
- IPv4 interface bind in production, preventing Railway's Private Networking from functioning
- Reloader mode enabled in production
## [0.2.1] - 2024-11-01
### Changed
- Mildly reformatted `README.md`
- A development mode check for the `app.state.ip_pool`'s initialization (caused application failure in production only)
### Fixed
- Improper formatting of blockquote Alerts in `README.md`
## [0.2.0] - 2024-11-01
### Added
- This `CHANGELOG.md` file
- Structured logging with `structlog`
- Readable `ConsoleRenderer` for local development
- `JSONRenderer` for production logging
- Request-Id Middleware with `asgi-correlation-id`
- Expanded README.md with more comprehensive instructions for installation & usage
- Repository-wide improved documentation details, comments
- CodeSpell exceptions in VSCode workspace settings
### Changed
- Switched from `hypercorn` to `uvicorn` for ASGI runtime
- Switched to direct module 'serve' command in `backend/run.sh` & `backend/railway.json`
- Relocated `.tool-versions` to project root
- Massively overhauled run.sh scripts, mostly for backend service
- Improved environment variable access in logging setup
- Root logger now adheres to the same format as the rest of the application
- Hide IP list when error occurs on client
- `run.sh` passes through all arguments, e.g. bpython REPL via `./run.sh repl`
- Use UTC timezone for timestamps, localize human readable strings, fixing 4 hour offset issue
- `is_development` available globally from `utilities` module
### Removed
- Deprecated `startup` and `shutdown` events
- Development-only randomized IP address pool for testing

README.md

@@ -1,49 +1,89 @@
# linkpulse
This is an empty project right now. It merely holds a simplistic FastAPI server to showcase Railway.
A project for monitoring websites, built with FastAPI and React.
- Windows WSL is recommended for development. See [here][wsl] for setup instructions.
## Structure
## Project Structure
A description of the project's folder structure.
- `/backend` A backend server using [FastAPI][fastapi], managed with [Poetry][poetry].
- `/frontend` A frontend server using [React][react], managed with [pnpm][pnpm].
- `/backend/linkpulse` A python module containing the FastAPI application, database models, migration scripts, and more.
- `/backend/migrations` Migration scripts for [`peewee`][peewee]; most of this is generated automatically.
- `/frontend` A frontend server using [React][react], managed with [pnpm][pnpm], built with [Vite][vite].
- `/frontend/Caddyfile` A Caddy configuration file used for proxying API requests to the backend server via Private Networking (Railway).
- `/frontend/nixpacks.toml` Configures the frontend build process for Nixpacks, enabling the use of Caddy for deployment.
## Setup
Windows WSL is **strongly recommended** for development. See [here][wsl] for setup instructions.
The following instructions were written for Ubuntu 22.04 LTS, the primary (default) target for WSL.
### Frontend
1. Install Node.js 22.x
<!-- TODO: Add details on installation practices, asdf + nvm -->
3. Install `pnpm` with `npm install -g pnpm`
I recommend [`asdf`][asdf] or [`nvm`][nvm] for managing this (although `asdf` is superior in my opinion, and it's tool/language agnostic). [Alternatives are available though](https://nodejs.org/en/download/package-manager).
Assuming you're using Bash/Zsh & Git, you'll need to add this to your bashrc file: `. "$HOME/.asdf/asdf.sh"`. Shell completions are recommended, but optional. Refer to documentation [here][asdf-install] for further detail.
Once added, restart your terminal and `cd` into the project root.
```
asdf plugin add nodejs
asdf install
```
This installs the version of Node.js specified in [`.tool-versions`](.tool-versions).
> [!NOTE]
> If you use Node.js for other projects, you may want to install the version you need & set it as the global version via `asdf global nodejs <version>` or `asdf install nodejs latest:<version>`. If you don't care, `asdf install nodejs latest` also works.
2. Install `pnpm` with `npm install -g pnpm`
3. `cd frontend`
4. Install frontend dependencies with `pnpm install`
5. Start the frontend server with `./run.sh`
<!-- TODO: Get local Caddy server with Vite builds working. -->
<!-- TODO: Get local Caddy server working. -->
### Backend
1. Install [`pyenv`][pyenv] or [`pyenv-win`][pyenv-win]
- Install Python 3.12 (`pyenv install 3.12`)
- Install Python 3.12 (`pyenv install 3.12`)
2. Install `poetry`
- Requires `pipx`, see [here][pipx]
- Install with `pipx install poetry`
- Requires `pipx`, see [here][pipx]. You will NOT have this by default. This is NOT `pip`; do not install either of them with `pip`.
- Install with `pipx install poetry`
3. Install backend dependencies with `poetry install`.
4. Start the backend server with `./run.sh`
5. (_optional_) Install the [Railway CLI][railway]
- Fastest installation is via shell: `bash <(curl -fsSL cli.new)`
- Alternatives found [here][railway].
- This will let us skip creating a local `.env` file, as well as keeping your database URL synchronized.
- You will have to run `railway login` upon install as well as `railway link` in the backend directory.
## Usage
- A fully editable (frontend and backend), automatically reloading project is possible, but it requires two terminals.
- Each terminal must start in the respective directory (`/backend` and `/frontend`).
- `./run.sh` will start the development server in the respective directory.
- The first argument is optional, but can be used in the frontend to compile & serve the backend.
A full stack (_frontend_ and _backend_), automatically reloading project is possible, but it requires two terminals.
1. Open a terminal in each respective directory (`/backend` and `/frontend`).
2. Execute `./run.sh` to start the development server for each.
- For the backend, you'll either need to have the `railway` CLI installed or a `.env` file with the database URL.
- See [`.env.example`](backend/.env.example) for a list of all available environment variables.
- For the frontend, the defaults are already sufficient.
> [!WARNING]
> The `run.sh` scripts provide default environment variables internally; if you want to run the commands manually, you'll need to provide them to `.env` files or the command line.
[peewee]: https://docs.peewee-orm.com/en/latest/
[railway]: https://docs.railway.app/guides/cli
[vite]: https://vite.dev/
[asdf]: https://asdf-vm.com/
[asdf-install]: https://asdf-vm.com/guide/getting-started.html#_3-install-asdf
[nvm]: https://github.com/nvm-sh/nvm
[fastapi]: https://fastapi.tiangolo.com/
[poetry]: https://python-poetry.org/
[react]: https://react.dev/
@@ -51,4 +91,4 @@ This is an empty project right now. It merely holds a simplistic FastAPI server
[wsl]: https://docs.microsoft.com/en-us/windows/wsl/install
[pipx]: https://pipx.pypa.io/stable/installation/
[pyenv]: https://github.com/pyenv/pyenv
[pyenv-win]: https://github.com/pyenv-win/pyenv-win
[pyenv-win]: https://github.com/pyenv-win/pyenv-win

backend/.env.example Normal file

@@ -0,0 +1 @@
DATABASE_URL=

backend/linkpulse/__main__.py

@@ -1,34 +1,90 @@
"""
This module serves as the entry point for the LinkPulse application. It provides
command-line interface (CLI) commands to serve the application, run migrations,
or start a REPL (Read-Eval-Print Loop) session.
Commands:
- serve: Starts the application server using Uvicorn.
- migrate: Runs database migrations.
- repl: Starts an interactive Python shell with pre-imported objects and models.
"""
from linkpulse.logging import setup_logging
# We want to setup logging as early as possible.
setup_logging()
import os
import sys
import structlog
logger = structlog.get_logger()
def main(*args):
"""
Primary entrypoint for the LinkPulse application
- Don't import any modules globally unless you're certain it's necessary. Imports should be tightly controlled.
"""
if args[0] == "serve":
import asyncio
from hypercorn import Config
from hypercorn.asyncio import serve
from linkpulse.app import app
from linkpulse.utilities import is_development
from uvicorn import run
config = Config()
config.use_reloader = True
logger.debug("Invoking uvicorn.run")
run(
"linkpulse.app:app",
reload=is_development,
# Both options are special IP addresses that allow the server to listen on all network interfaces. One is for IPv4, the other for IPv6.
# Railway's private networking requires IPv6, so we must use that in production.
host="0.0.0.0" if is_development else "::",
port=int(os.getenv("PORT", "8000")),
log_config={
"version": 1,
"disable_existing_loggers": False,
"loggers": {
"uvicorn": {"propagate": True},
"uvicorn.access": {"propagate": True},
},
},
)
asyncio.run(serve(app, config))
elif args[0] == "migrate":
from linkpulse.migrate import main
main(*args[1:])
main(*args)
elif args[0] == "repl":
import linkpulse
lp = linkpulse
from linkpulse.app import app, db
import linkpulse
# import most useful objects, models, and functions
lp = linkpulse # alias
from linkpulse.utilities import get_db
from linkpulse.app import app
from linkpulse.models import BaseModel, IPAddress
from bpython import embed
db = get_db()
# start REPL
from bpython import embed # type: ignore
embed(locals())
else:
print("Invalid command: {}".format(args[0]))
raise ValueError("Unexpected command: {}".format(" ".join(args)))
if __name__ == "__main__":
if len(sys.argv) == 1:
logger.debug("Entrypoint", argv=sys.argv)
args = sys.argv[1:]
if len(args) == 0:
logger.debug("No arguments provided, defaulting to 'serve'")
main("serve")
else:
# Check that args after aren't all whitespace
remaining_args = ' '.join(sys.argv[1:]).strip()
if len(remaining_args) > 0:
main(*sys.argv[1:])
normalized_args = " ".join(args).strip()
if len(normalized_args) == 0:
logger.warning("Whitespace arguments provided, defaulting to 'serve'")
logger.debug("Invoking main with arguments", args=args)
main(*args)

backend/linkpulse/app.py

@@ -1,35 +1,37 @@
import logging
import os
import random
from collections import defaultdict
from contextlib import asynccontextmanager
from dataclasses import dataclass, field
from datetime import datetime
from datetime import datetime, timezone
from typing import AsyncIterator
import human_readable
import pytz
import structlog
from apscheduler.schedulers.background import BackgroundScheduler # type: ignore
from apscheduler.triggers.interval import IntervalTrigger # type: ignore
from asgi_correlation_id import CorrelationIdMiddleware
from dotenv import load_dotenv
from fastapi import FastAPI, Request, Response, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import ORJSONResponse
from fastapi_cache import FastAPICache
from fastapi_cache.backends.inmemory import InMemoryBackend
from fastapi_cache.decorator import cache
import human_readable
from linkpulse.utilities import get_ip, hide_ip, pluralize
from peewee import PostgresqlDatabase
from linkpulse.logging import setup_logging
from linkpulse.middleware import LoggingMiddleware
from linkpulse.utilities import get_db, get_ip, hide_ip, is_development
from psycopg2.extras import execute_values
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
load_dotenv(dotenv_path=".env")
from linkpulse import models, responses # type: ignore
is_development = os.getenv("ENVIRONMENT") == "development"
db: PostgresqlDatabase = models.BaseModel._meta.database
db = get_db()
def flush_ips():
if len(app.state.buffered_updates) == 0:
logger.debug("No IPs to flush to Database")
return
try:
@@ -50,11 +52,11 @@ def flush_ips():
cur = db.cursor()
execute_values(cur, sql, rows)
except:
print("Failed to flush IPs to the database.")
except Exception as e:
logger.error("Failed to flush IPs to Database", error=e)
i = len(app.state.buffered_updates)
print("Flushed {} IP{} to the database.".format(i, pluralize(i)))
logger.debug("IPs written to database", count=i)
# Finish up
app.state.buffered_updates.clear()
@@ -66,18 +68,32 @@ scheduler.add_job(flush_ips, IntervalTrigger(seconds=5))
@asynccontextmanager
async def lifespan(_: FastAPI) -> AsyncIterator[None]:
# Originally, this was used to generate a pool of random IP addresses so we could demo a changing list.
# Now, this isn't necessary, but I just wanna test it for now. It'll be removed pretty soon.
random.seed(42) # 42 is the answer to everything
app.state.ip_pool = [
".".join(str(random.randint(0, 255)) for _ in range(4)) for _ in range(50)
]
# Connect to database, ensure specific tables exist
db.connect()
db.create_tables([models.IPAddress])
# Delete all randomly generated IP addresses
with db.atomic():
logger.info(
"Deleting Randomized IP Addresses", ip_pool_count=len(app.state.ip_pool)
)
query = models.IPAddress.delete().where(
models.IPAddress.ip << app.state.ip_pool
)
row_count = query.execute()
logger.info("Randomized IP Addresses deleted", row_count=row_count)
FastAPICache.init(
backend=InMemoryBackend(), prefix="fastapi-cache", cache_status_header="X-Cache"
)
if is_development:
# 42 is the answer to everything
random.seed(42)
# Generate a pool of random IP addresses
app.state.ip_pool = [
".".join(str(random.randint(0, 255)) for _ in range(4)) for _ in range(50)
]
app.state.buffered_updates = defaultdict(IPCounter)
scheduler.start()
@@ -87,42 +103,39 @@ async def lifespan(_: FastAPI) -> AsyncIterator[None]:
scheduler.shutdown()
flush_ips()
if not db.is_closed():
db.close()
@dataclass
class IPCounter:
# Note: This is not the true 'seen' count, but the count of how many times the IP has been seen since the last flush.
count: int = 0
last_seen: datetime = field(default_factory=datetime.now)
last_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
app = FastAPI(lifespan=lifespan)
app = FastAPI(lifespan=lifespan, default_response_class=ORJSONResponse)
setup_logging()
logger = structlog.get_logger()
if is_development:
origins = [
"http://localhost",
"http://localhost:5173",
]
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_origins=[
"http://localhost",
"http://localhost:5173",
],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.on_event("startup")
def startup():
db.connect()
db.create_tables([models.IPAddress])
@app.on_event("shutdown")
def shutdown():
if not db.is_closed():
db.close()
app.add_middleware(LoggingMiddleware)
app.add_middleware(CorrelationIdMiddleware)
@app.get("/health")
@@ -144,25 +157,19 @@ async def get_migration():
return {"name": name, "migrated_at": migrated_at}
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@app.get("/api/ips")
async def get_ips(request: Request, response: Response):
"""
Returns a list of partially redacted IP addresses, as well as submitting the user's IP address to the database (buffered).
"""
now = datetime.now()
now = datetime.now(timezone.utc)
# Get the user's IP address
user_ip = (
get_ip(request) if not is_development else random.choice(app.state.ip_pool)
)
user_ip = get_ip(request)
# If the IP address is not found, return an error
if user_ip is None:
print("No IP found!")
logger.warning("unable to acquire user IP address")
response.status_code = status.HTTP_403_FORBIDDEN
return {"error": "Unable to handle request."}
@@ -183,7 +190,10 @@ async def get_ips(request: Request, response: Response):
"ips": [
responses.SeenIP(
ip=hide_ip(ip.ip) if ip.ip != user_ip else ip.ip,
last_seen=human_readable.date_time(ip.last_seen),
last_seen=human_readable.date_time(
value=pytz.utc.localize(ip.last_seen),
when=datetime.now(timezone.utc),
),
count=ip.count,
)
for ip in latest_ips
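
The `app.py` diff above also replaces the deprecated `@app.on_event` hooks with a lifespan context manager; a minimal standalone sketch of the pattern:

```python
from contextlib import asynccontextmanager
from typing import AsyncIterator

from fastapi import FastAPI


@asynccontextmanager
async def lifespan(_: FastAPI) -> AsyncIterator[None]:
    # startup work goes here (connect to the database, start the scheduler, ...)
    yield
    # shutdown work goes here (flush buffers, close the connection, ...)


app = FastAPI(lifespan=lifespan)
```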

backend/linkpulse/logging.py Normal file

@@ -0,0 +1,165 @@
import logging
import os
import sys
from typing import Any, List, Optional
import structlog
from structlog.types import EventDict, Processor
def decode_bytes(_: Any, __: Any, bs: bytes) -> str:
"""
orjson returns bytes; we need strings
"""
return bs.decode()
def rename_event_key(_: Any, __: Any, event_dict: EventDict) -> EventDict:
"""
Renames the `event` key to `msg`, as Railway expects it in that form.
"""
event_dict["msg"] = event_dict.pop("event")
return event_dict
def drop_color_message_key(_: Any, __: Any, event_dict: EventDict) -> EventDict:
"""
Uvicorn logs the message a second time in the extra `color_message`, but we don't
need it. This processor drops the key from the event dict if it exists.
"""
event_dict.pop("color_message", None)
return event_dict
def setup_logging(
json_logs: Optional[bool] = None, log_level: Optional[str] = None
) -> None:
# Pull from environment variables, apply defaults if not set
json_logs = json_logs or os.getenv("LOG_JSON_FORMAT", "true").lower() == "true"
log_level = log_level or os.getenv("LOG_LEVEL", "INFO")
def flatten(n):
"""
Flattens a nested list into a single list of elements.
"""
match n:
case []:
return []
case [[*hd], *tl]:
return [*flatten(hd), *flatten(tl)]
case [hd, *tl]:
return [hd, *flatten(tl)]
# Shared structlog processors, both for the root logger and foreign loggers
shared_processors: List[Processor] = flatten(
[
structlog.contextvars.merge_contextvars,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.stdlib.ExtraAdder(),
drop_color_message_key,
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
# Processors only used for the JSON renderer
(
[
rename_event_key,
# Format the exception only for JSON logs, as we want to pretty-print them when using the ConsoleRenderer
structlog.processors.format_exc_info,
]
if json_logs
else []
),
]
)
# Main structlog configuration
structlog.configure(
processors=[
*shared_processors,
# Prepare event dict for `ProcessorFormatter`.
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
logger_factory=structlog.stdlib.LoggerFactory(),
cache_logger_on_first_use=True,
)
log_renderer: structlog.types.Processor
if json_logs:
import orjson
log_renderer = structlog.processors.JSONRenderer(serializer=orjson.dumps)
else:
log_renderer = structlog.dev.ConsoleRenderer()
formatter = structlog.stdlib.ProcessorFormatter(
# These run ONLY on `logging` entries that do NOT originate within structlog.
foreign_pre_chain=shared_processors,
# These run on ALL entries after the pre_chain is done.
processors=[
# Remove _record & _from_structlog.
structlog.stdlib.ProcessorFormatter.remove_processors_meta,
log_renderer,
# required with orjson
*([decode_bytes] if json_logs else []), # type: ignore
],
)
handler = logging.StreamHandler()
# Use OUR `ProcessorFormatter` to format all `logging` entries.
handler.setFormatter(formatter)
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(log_level.upper())
def configure_logger(
name: str,
level: Optional[str] = None,
clear: Optional[bool] = None,
propagate: Optional[bool] = None,
) -> None:
"""Helper function to configure a logger with the given parameters."""
logger = logging.getLogger(name)
if level is not None:
logger.setLevel(level.upper())
if clear is True:
logger.handlers.clear()
if propagate is not None:
logger.propagate = propagate
# Clear the log handlers for uvicorn loggers, and enable propagation
# so the messages are caught by our root logger and formatted correctly
# by structlog
configure_logger("uvicorn", clear=True, propagate=True)
configure_logger("uvicorn.error", clear=True, propagate=True)
# Disable the apscheduler loggers, as they are too verbose
# TODO: This should be configurable easily from a TOML or YAML file
configure_logger("apscheduler.executors.default", level="WARNING")
configure_logger("apscheduler.scheduler", level="WARNING")
# Since we re-create the access logs ourselves, to add all information
# in the structured log (see the `logging_middleware` in main.py), we clear
# the handlers and prevent the logs to propagate to a logger higher up in the
# hierarchy (effectively rendering them silent).
configure_logger("uvicorn.access", clear=True, propagate=False)
def handle_exception(exc_type, exc_value, exc_traceback):
"""
Log any uncaught exception instead of letting it be printed by Python
(but leave KeyboardInterrupt untouched to allow users to Ctrl+C to stop)
See https://stackoverflow.com/a/16993115/3641865
"""
if issubclass(exc_type, KeyboardInterrupt):
sys.__excepthook__(exc_type, exc_value, exc_traceback)
return
root_logger.error(
"Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback)
)
sys.excepthook = handle_exception
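
Hypothetical usage of the module above: call `setup_logging()` once, as early as possible (as `__main__.py` does), then log key/value pairs from anywhere:

```python
import structlog

from linkpulse.logging import setup_logging

setup_logging()  # reads LOG_JSON_FORMAT and LOG_LEVEL from the environment
logger = structlog.get_logger()
logger.info("scheduler_started", interval_seconds=5)  # rendered as JSON or console text
```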

backend/linkpulse/middleware.py Normal file

@@ -0,0 +1,58 @@
import time
from asgi_correlation_id import correlation_id
import structlog
from linkpulse.utilities import is_development
from fastapi import FastAPI, Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
class LoggingMiddleware(BaseHTTPMiddleware):
def __init__(self, app: FastAPI):
super().__init__(app)
self.access_logger = structlog.get_logger("api.access")
async def dispatch(self, request: Request, call_next) -> Response:
structlog.contextvars.clear_contextvars()
# These context vars will be added to all log entries emitted during the request
request_id = correlation_id.get()
structlog.contextvars.bind_contextvars(request_id=request_id)
start_time = time.perf_counter_ns()
# If the call_next raises an error, we still want to return our own 500 response,
# so we can add headers to it (process time, request ID...)
response = Response(status_code=500)
try:
response = await call_next(request)
except Exception:
# TODO: Validate that we don't swallow exceptions (unit test?)
structlog.stdlib.get_logger("api.error").exception("Uncaught exception")
raise
finally:
process_time_ms = "{:.2f}".format(
(time.perf_counter_ns() - start_time) / 10**6
)
self.access_logger.debug(
"Request",
http={
"url": str(request.url),
"query": dict(request.query_params),
"status_code": response.status_code,
"method": request.method,
"request_id": request_id,
"version": request.scope["http_version"],
},
client=(
{"ip": request.client.host, "port": request.client.port}
if request.client
else None
),
duration_ms=process_time_ms,
)
if is_development:
response.headers["X-Process-Time"] = process_time_ms
return response
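
One detail worth noting, sketched below: Starlette's `add_middleware` makes the most recently added middleware the outermost layer, so registering `CorrelationIdMiddleware` after `LoggingMiddleware` (as the `app.py` diff does) guarantees the request ID is already set when `LoggingMiddleware` binds it:

```python
from asgi_correlation_id import CorrelationIdMiddleware
from fastapi import FastAPI

from linkpulse.middleware import LoggingMiddleware

app = FastAPI()
app.add_middleware(LoggingMiddleware)        # inner layer: reads correlation_id
app.add_middleware(CorrelationIdMiddleware)  # outer layer: sets correlation_id
```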

backend/linkpulse/migrate.py

@@ -1,23 +1,31 @@
import os
import pkgutil
import re
import sys
from typing import List, Optional, Tuple
import questionary
import structlog
from dotenv import load_dotenv
from peewee import PostgresqlDatabase
from peewee_migrate import Router, router
logger = structlog.get_logger()
load_dotenv(dotenv_path=".env")
class ExtendedRouter(Router):
"""
The original Router class from peewee_migrate didn't have all the functions I needed, so several functions are added here
Added
- show: Show the suggested migration that will be created, without actually creating it
- all_migrations: Get all migrations that have been applied
"""
def show(self, module: str) -> Optional[Tuple[str, str]]:
"""
Show the suggested migration that will be created, without actually creating it.
Show the suggested migration that will be created, without actually creating it
:param module: The module to scan & diff against.
:param module: The module to scan & diff against
"""
migrate = rollback = ""
@@ -56,7 +64,7 @@ class ExtendedRouter(Router):
def all_migrations(self) -> List[str]:
"""
Get all migrations that have been applied.
Get all migrations that have been applied
"""
return [mm.name for mm in self.model.select().order_by(self.model.id)]
@@ -66,38 +74,40 @@ def main(*args: str) -> None:
Main function for running migrations.
Args are fed directly from sys.argv.
"""
from linkpulse.utilities import get_db
from linkpulse import models
db: PostgresqlDatabase = models.BaseModel._meta.database
db = get_db()
router = ExtendedRouter(
database=db,
migrate_dir="linkpulse/migrations",
ignore=[models.BaseModel._meta.table_name],
)
auto = "linkpulse.models"
target_models = "linkpulse.models" # The module to scan for models & changes
current = router.all_migrations()
if len(current) == 0:
diff = router.diff
if len(diff) == 0:
print(
logger.info(
"No migrations found, no pending migrations to apply. Creating initial migration."
)
migration = router.create("initial", auto=auto)
migration = router.create("initial", auto=target_models)
if not migration:
print("No changes detected. Something went wrong.")
logger.error("No changes detected. Something went wrong.")
else:
print(f"Migration created: {migration}")
logger.info(f"Migration created: {migration}")
router.run(migration)
diff = router.diff
if len(diff) > 0:
print(
logger.info(
"Note: Selecting a migration will apply all migrations up to and including the selected migration."
)
print(
logger.info(
"e.g. Applying 004 while only 001 is applied would apply 002, 003, and 004."
)
@@ -105,95 +115,76 @@ def main(*args: str) -> None:
"Select highest migration to apply:", choices=diff
).ask()
if choice is None:
print(
logger.warning(
"For safety reasons, you won't be able to create migrations without applying the pending ones."
)
if len(current) == 0:
print(
logger.warning(
"Warn: No migrations have been applied globally, which is dangerous. Something may be wrong."
)
return
result = router.run(choice)
print(f"Done. Applied migrations: {result}")
print("Warning: You should commit and push any new migrations immediately!")
logger.info(f"Done. Applied migrations: {result}")
logger.warning("You should commit and push any new migrations immediately!")
else:
print("No pending migrations to apply.")
logger.info("No pending migrations to apply.")
# Inspects models and might generate a migration script
migration_available = router.show(target_models)
migration_available = router.show(auto)
if migration_available is not None:
print("A migration is available to be applied:")
logger.info("A migration is available to be applied:")
migrate_text, rollback_text = migration_available
print("MIGRATION:")
for line in migrate_text.split("\n"):
if line.strip() == "":
continue
print("\t" + line)
print("ROLLBACK:")
for line in rollback_text.split("\n"):
if line.strip() == "":
continue
print("\t" + line)
def _reformat_text(text: str) -> str:
# Remove empty lines
text = [line for line in text.split("\n") if line.strip() != ""]
# Add line numbers, indent, ensure it starts on a new line
return "\n" + "\n".join([f"{i:02}:\t{line}" for i, line in enumerate(text)])
logger.info("Migration Content", content=_reformat_text(migrate_text))
logger.info("Rollback Content", content=_reformat_text(rollback_text))
if questionary.confirm("Do you want to create this migration?").ask():
print(
'Lowercase letters and underscores only (e.g. "create_table", "remove_ipaddress_count").'
logger.info(
'Minimum length 9, lowercase letters and underscores only (e.g. "create_table", "remove_ipaddress_count").'
)
migration_name: Optional[str] = questionary.text(
"Enter migration name",
validate=lambda text: re.match("^[a-z_]+$", text) is not None,
validate=lambda text: re.match("^[a-z_]{9,}$", text) is not None,
).ask()
if migration_name is None:
return
migration = router.create(migration_name, auto=auto)
migration = router.create(migration_name, auto=target_models)
if migration:
print(f"Migration created: {migration}")
logger.info(f"Migration created: {migration}")
if len(router.diff) == 1:
if questionary.confirm(
"Do you want to apply this migration immediately?"
).ask():
router.run(migration)
print("Done.")
print("!!! Commit and push this migration file immediately!")
logger.info("Done.")
logger.warning(
"!!! Commit and push this migration file immediately!"
)
else:
print("No changes detected. Something went wrong.")
return
raise RuntimeError(
"Changes anticipated with show() but no migration created with create(), model definition may have reverted."
)
else:
print("No database changes detected.")
logger.info("No database changes detected.")
if len(current) > 5:
if questionary.confirm(
"There are more than 5 migrations applied. Do you want to merge them?",
default=False,
).ask():
print("Merging migrations...")
logger.info("Merging migrations...")
router.merge(name="initial")
print("Done.")
logger.info("Done.")
print("!!! Commit and push this merged migration file immediately!")
# Testing Code:
"""
print(router.print('linkpulse.models'))
# Create migration
print("Creating migration")
migration = router.create('test', auto='linkpulse.models')
if migration is None:
print("No changes detected")
else:
print(f"Migration Created: {migration}")
# Run migration/migrations
router.run(migration)
Run all unapplied migrations
print("Running all unapplied migrations")
applied = router.run()
print(f"Applied migrations: {applied}")
"""
logger.warning("Commit and push this merged migration file immediately!")

backend/linkpulse/models.py

@@ -1,13 +1,32 @@
from peewee import Model, CharField, DateTimeField, IntegerField
"""models.py
This module defines the database models for the LinkPulse backend.
It also provides a base model with database connection details.
"""
from os import getenv
import structlog
from peewee import CharField, DateTimeField, IntegerField, Model
from playhouse.db_url import connect
from os import environ
logger = structlog.get_logger()
# I can't pollute the class definition with these lines, so I'll move them to a separate function.
def _get_database_url():
url = getenv("DATABASE_URL")
if url is None or url.strip() == "":
raise ValueError("DATABASE_URL is not set")
return url
class BaseModel(Model):
class Meta:
database = connect(url=environ.get('DATABASE_URL'))
# accessed via `BaseModel._meta.database`
database = connect(url=_get_database_url())
class IPAddress(BaseModel):
ip = CharField(primary_key=True)
last_seen = DateTimeField()
count = IntegerField(default=0)
last_seen = DateTimeField() # timezone naive
count = IntegerField(default=0)
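
A sketch of the new check's effect, under the assumption that the import happens with no `DATABASE_URL` in the environment: the module now fails loudly at import time instead of producing a cryptic connection error later:

```python
import os

os.environ.pop("DATABASE_URL", None)
try:
    from linkpulse import models  # noqa: F401
except ValueError as e:
    print(e)  # "DATABASE_URL is not set"
```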

backend/linkpulse/responses.py

@@ -1,8 +1,12 @@
"""responses.py
This module contains the response models for the FastAPI application.
"""
from pydantic import BaseModel
from datetime import datetime
class SeenIP(BaseModel):
ip: str
last_seen: str
count: int
count: int

backend/linkpulse/utilities.py

@@ -1,37 +1,63 @@
"""utilities.py
This module provides utility functions for database connection, string manipulation, and IP address handling.
"""
import os
from typing import Optional
from fastapi import Request
from peewee import PostgresqlDatabase
# globally referenced
is_development = os.getenv("ENVIRONMENT") == "development"
def pluralize(count: int) -> str:
def get_db() -> PostgresqlDatabase:
"""
Acquires the database connector from the BaseModel class.
This is not a cursor, but a connection to the database.
"""
# Might not be necessary, but I'd prefer to not import heavy modules with side effects in a utility module.
from linkpulse import models
return models.BaseModel._meta.database # type: ignore
def pluralize(count: int, word: Optional[str] = None) -> str:
"""
Pluralize a word based on count. Returns 's' if count is not 1, '' (empty string) otherwise.
"""
return 's' if count != 1 else ''
if word:
return word + "s" if count != 1 else word
return "s" if count != 1 else ""
def get_ip(request: Request) -> Optional[str]:
"""
This function attempts to retrieve the client's IP address from the request headers.
It first checks the 'X-Forwarded-For' header, which is commonly used in proxy setups.
If the header is present, it returns the first IP address in the list.
If the header is not present, it falls back to the client's direct connection IP address.
If neither is available, it returns None.
Args:
request (Request): The request object containing headers and client information.
Returns:
Optional[str]: The client's IP address if available, otherwise None.
"""
x_forwarded_for = request.headers.get('X-Forwarded-For')
x_forwarded_for = request.headers.get("X-Forwarded-For")
if x_forwarded_for:
return x_forwarded_for.split(',')[0]
return x_forwarded_for.split(",")[0]
if request.client:
return request.client.host
return None
def hide_ip(ip: str, hidden_octets: Optional[int] = None) -> str:
"""
Hide the last octet(s) of an IP address.
@@ -46,26 +72,33 @@ def hide_ip(ip: str, hidden_octets: Optional[int] = None) -> str:
Examples:
>>> hide_ip("192.168.1.1")
'192.168.1.X'
>>> hide_ip("192.168.1.1", 2)
'192.168.X.X'
>>> hide_ip("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
'2001:0db8:85a3:0000:0000:XXXX:XXXX:XXXX'
>>> hide_ip("2001:0db8:85a3:0000:0000:8a2e:0370:7334", 4)
'2001:0db8:85a3:0000:XXXX:XXXX:XXXX:XXXX'
"""
ipv6 = ':' in ip
ipv6 = ":" in ip
# Make sure that IPv4 (dot) and IPv6 (colon) addresses are not mixed together somehow. Not a comprehensive check.
if ipv6 == ('.' in ip):
if ipv6 == ("." in ip):
raise ValueError("Invalid IP address format. Must be either IPv4 or IPv6.")
# Secondary check, if the IP address is an IPv6 address with unspecified address (::), return it as is.
if ipv6 and ip.startswith("::"):
return ip
total_octets = 8 if ipv6 else 4
separator = ':' if ipv6 else '.'
replacement = 'XXXX' if ipv6 else 'X'
separator = ":" if ipv6 else "."
replacement = "XXXX" if ipv6 else "X"
if hidden_octets is None:
hidden_octets = 3 if ipv6 else 1
return separator.join(ip.split(separator, total_octets - hidden_octets)[:-1]) + (separator + replacement) * hidden_octets
return (
separator.join(ip.split(separator, total_octets - hidden_octets)[:-1])
+ (separator + replacement) * hidden_octets
)
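
A behavior sketch for the helpers above, following their docstrings and the new unspecified-address check:

```python
from linkpulse.utilities import hide_ip, pluralize

hide_ip("192.168.1.1")     # -> '192.168.1.X'
hide_ip("::")              # -> '::' (unspecified IPv6, returned as-is)
pluralize(2, "migration")  # -> 'migrations'
pluralize(1)               # -> '' (bare suffix form)
```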

backend/nixpacks.toml

@@ -1,2 +1,3 @@
[variables]
# Otherwise, Poetry will use a very old & incompatible version, 1.3.1
NIXPACKS_POETRY_VERSION='1.8.4'

backend/poetry.lock generated

@@ -70,6 +70,24 @@ tornado = ["tornado (>=4.3)"]
twisted = ["twisted"]
zookeeper = ["kazoo"]
[[package]]
name = "asgi-correlation-id"
version = "4.3.4"
description = "Middleware correlating project logs to individual requests"
optional = false
python-versions = "<4.0,>=3.8"
files = [
{file = "asgi_correlation_id-4.3.4-py3-none-any.whl", hash = "sha256:36ce69b06c7d96b4acb89c7556a4c4f01a972463d3d49c675026cbbd08e9a0a2"},
{file = "asgi_correlation_id-4.3.4.tar.gz", hash = "sha256:ea6bc310380373cb9f731dc2e8b2b6fb978a76afe33f7a2384f697b8d6cd811d"},
]
[package.dependencies]
packaging = "*"
starlette = ">=0.18"
[package.extras]
celery = ["celery"]
[[package]]
name = "blessed"
version = "1.20.0"
@@ -460,32 +478,6 @@ files = [
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
]
[[package]]
name = "h2"
version = "4.1.0"
description = "HTTP/2 State-Machine based protocol implementation"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "h2-4.1.0-py3-none-any.whl", hash = "sha256:03a46bcf682256c95b5fd9e9a99c1323584c3eec6440d379b9903d709476bc6d"},
{file = "h2-4.1.0.tar.gz", hash = "sha256:a83aca08fbe7aacb79fec788c9c0bac936343560ed9ec18b82a13a12c28d2abb"},
]
[package.dependencies]
hpack = ">=4.0,<5"
hyperframe = ">=6.0,<7"
[[package]]
name = "hpack"
version = "4.0.0"
description = "Pure-Python HPACK header compression"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "hpack-4.0.0-py3-none-any.whl", hash = "sha256:84a076fad3dc9a9f8063ccb8041ef100867b1878b25ef0ee63847a5d53818a6c"},
{file = "hpack-4.0.0.tar.gz", hash = "sha256:fc41de0c63e687ebffde81187a948221294896f6bdc0ae2312708df339430095"},
]
[[package]]
name = "human-readable"
version = "1.3.4"
@@ -497,40 +489,6 @@ files = [
{file = "human_readable-1.3.4.tar.gz", hash = "sha256:5726eac89066ec25d14447a173e645a855184645d024eb306705e2bfbb60f0c0"},
]
[[package]]
name = "hypercorn"
version = "0.14.4"
description = "A ASGI Server based on Hyper libraries and inspired by Gunicorn"
optional = false
python-versions = ">=3.7"
files = [
{file = "hypercorn-0.14.4-py3-none-any.whl", hash = "sha256:f956200dbf8677684e6e976219ffa6691d6cf795281184b41dbb0b135ab37b8d"},
{file = "hypercorn-0.14.4.tar.gz", hash = "sha256:3fa504efc46a271640023c9b88c3184fd64993f47a282e8ae1a13ccb285c2f67"},
]
[package.dependencies]
h11 = "*"
h2 = ">=3.1.0"
priority = "*"
wsproto = ">=0.14.0"
[package.extras]
docs = ["pydata_sphinx_theme"]
h3 = ["aioquic (>=0.9.0,<1.0)"]
trio = ["exceptiongroup (>=1.1.0)", "trio (>=0.22.0)"]
uvloop = ["uvloop"]
[[package]]
name = "hyperframe"
version = "6.0.1"
description = "HTTP/2 framing layer for Python"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "hyperframe-6.0.1-py3-none-any.whl", hash = "sha256:0ec6bafd80d8ad2195c4f03aacba3a8265e57bc4cff261e802bf39970ed02a15"},
{file = "hyperframe-6.0.1.tar.gz", hash = "sha256:ae510046231dc8e9ecb1a6586f63d2347bf4c8905914aa84ba585ae85f28a914"},
]
[[package]]
name = "idna"
version = "3.10"
@@ -573,6 +531,84 @@ files = [
[package.dependencies]
psutil = "*"
[[package]]
name = "orjson"
version = "3.10.10"
description = "Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy"
optional = false
python-versions = ">=3.8"
files = [
{file = "orjson-3.10.10-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:b788a579b113acf1c57e0a68e558be71d5d09aa67f62ca1f68e01117e550a998"},
{file = "orjson-3.10.10-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:804b18e2b88022c8905bb79bd2cbe59c0cd014b9328f43da8d3b28441995cda4"},
{file = "orjson-3.10.10-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9972572a1d042ec9ee421b6da69f7cc823da5962237563fa548ab17f152f0b9b"},
{file = "orjson-3.10.10-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc6993ab1c2ae7dd0711161e303f1db69062955ac2668181bfdf2dd410e65258"},
{file = "orjson-3.10.10-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d78e4cacced5781b01d9bc0f0cd8b70b906a0e109825cb41c1b03f9c41e4ce86"},
{file = "orjson-3.10.10-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e6eb2598df518281ba0cbc30d24c5b06124ccf7e19169e883c14e0831217a0bc"},
{file = "orjson-3.10.10-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:23776265c5215ec532de6238a52707048401a568f0fa0d938008e92a147fe2c7"},
{file = "orjson-3.10.10-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8cc2a654c08755cef90b468ff17c102e2def0edd62898b2486767204a7f5cc9c"},
{file = "orjson-3.10.10-cp310-none-win32.whl", hash = "sha256:081b3fc6a86d72efeb67c13d0ea7c030017bd95f9868b1e329a376edc456153b"},
{file = "orjson-3.10.10-cp310-none-win_amd64.whl", hash = "sha256:ff38c5fb749347768a603be1fb8a31856458af839f31f064c5aa74aca5be9efe"},
{file = "orjson-3.10.10-cp311-cp311-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:879e99486c0fbb256266c7c6a67ff84f46035e4f8749ac6317cc83dacd7f993a"},
{file = "orjson-3.10.10-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:019481fa9ea5ff13b5d5d95e6fd5ab25ded0810c80b150c2c7b1cc8660b662a7"},
{file = "orjson-3.10.10-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0dd57eff09894938b4c86d4b871a479260f9e156fa7f12f8cad4b39ea8028bb5"},
{file = "orjson-3.10.10-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dbde6d70cd95ab4d11ea8ac5e738e30764e510fc54d777336eec09bb93b8576c"},
{file = "orjson-3.10.10-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b2625cb37b8fb42e2147404e5ff7ef08712099197a9cd38895006d7053e69d6"},
{file = "orjson-3.10.10-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbf3c20c6a7db69df58672a0d5815647ecf78c8e62a4d9bd284e8621c1fe5ccb"},
{file = "orjson-3.10.10-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:75c38f5647e02d423807d252ce4528bf6a95bd776af999cb1fb48867ed01d1f6"},
{file = "orjson-3.10.10-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:23458d31fa50ec18e0ec4b0b4343730928296b11111df5f547c75913714116b2"},
{file = "orjson-3.10.10-cp311-none-win32.whl", hash = "sha256:2787cd9dedc591c989f3facd7e3e86508eafdc9536a26ec277699c0aa63c685b"},
{file = "orjson-3.10.10-cp311-none-win_amd64.whl", hash = "sha256:6514449d2c202a75183f807bc755167713297c69f1db57a89a1ef4a0170ee269"},
{file = "orjson-3.10.10-cp312-cp312-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:8564f48f3620861f5ef1e080ce7cd122ee89d7d6dacf25fcae675ff63b4d6e05"},
{file = "orjson-3.10.10-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5bf161a32b479034098c5b81f2608f09167ad2fa1c06abd4e527ea6bf4837a9"},
{file = "orjson-3.10.10-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:68b65c93617bcafa7f04b74ae8bc2cc214bd5cb45168a953256ff83015c6747d"},
{file = "orjson-3.10.10-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e8e28406f97fc2ea0c6150f4c1b6e8261453318930b334abc419214c82314f85"},
{file = "orjson-3.10.10-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e4d0d9fe174cc7a5bdce2e6c378bcdb4c49b2bf522a8f996aa586020e1b96cee"},
{file = "orjson-3.10.10-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b3be81c42f1242cbed03cbb3973501fcaa2675a0af638f8be494eaf37143d999"},
{file = "orjson-3.10.10-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:65f9886d3bae65be026219c0a5f32dbbe91a9e6272f56d092ab22561ad0ea33b"},
{file = "orjson-3.10.10-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:730ed5350147db7beb23ddaf072f490329e90a1d059711d364b49fe352ec987b"},
{file = "orjson-3.10.10-cp312-none-win32.whl", hash = "sha256:a8f4bf5f1c85bea2170800020d53a8877812892697f9c2de73d576c9307a8a5f"},
{file = "orjson-3.10.10-cp312-none-win_amd64.whl", hash = "sha256:384cd13579a1b4cd689d218e329f459eb9ddc504fa48c5a83ef4889db7fd7a4f"},
{file = "orjson-3.10.10-cp313-cp313-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:44bffae68c291f94ff5a9b4149fe9d1bdd4cd0ff0fb575bcea8351d48db629a1"},
{file = "orjson-3.10.10-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e27b4c6437315df3024f0835887127dac2a0a3ff643500ec27088d2588fa5ae1"},
{file = "orjson-3.10.10-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca84df16d6b49325a4084fd8b2fe2229cb415e15c46c529f868c3387bb1339d"},
{file = "orjson-3.10.10-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c14ce70e8f39bd71f9f80423801b5d10bf93d1dceffdecd04df0f64d2c69bc01"},
{file = "orjson-3.10.10-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:24ac62336da9bda1bd93c0491eff0613003b48d3cb5d01470842e7b52a40d5b4"},
{file = "orjson-3.10.10-cp313-none-win32.whl", hash = "sha256:eb0a42831372ec2b05acc9ee45af77bcaccbd91257345f93780a8e654efc75db"},
{file = "orjson-3.10.10-cp313-none-win_amd64.whl", hash = "sha256:f0c4f37f8bf3f1075c6cc8dd8a9f843689a4b618628f8812d0a71e6968b95ffd"},
{file = "orjson-3.10.10-cp38-cp38-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:829700cc18503efc0cf502d630f612884258020d98a317679cd2054af0259568"},
{file = "orjson-3.10.10-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0ceb5e0e8c4f010ac787d29ae6299846935044686509e2f0f06ed441c1ca949"},
{file = "orjson-3.10.10-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0c25908eb86968613216f3db4d3003f1c45d78eb9046b71056ca327ff92bdbd4"},
{file = "orjson-3.10.10-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:218cb0bc03340144b6328a9ff78f0932e642199ac184dd74b01ad691f42f93ff"},
{file = "orjson-3.10.10-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e2277ec2cea3775640dc81ab5195bb5b2ada2fe0ea6eee4677474edc75ea6785"},
{file = "orjson-3.10.10-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:848ea3b55ab5ccc9d7bbd420d69432628b691fba3ca8ae3148c35156cbd282aa"},
{file = "orjson-3.10.10-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:e3e67b537ac0c835b25b5f7d40d83816abd2d3f4c0b0866ee981a045287a54f3"},
{file = "orjson-3.10.10-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:7948cfb909353fce2135dcdbe4521a5e7e1159484e0bb024c1722f272488f2b8"},
{file = "orjson-3.10.10-cp38-none-win32.whl", hash = "sha256:78bee66a988f1a333dc0b6257503d63553b1957889c17b2c4ed72385cd1b96ae"},
{file = "orjson-3.10.10-cp38-none-win_amd64.whl", hash = "sha256:f1d647ca8d62afeb774340a343c7fc023efacfd3a39f70c798991063f0c681dd"},
{file = "orjson-3.10.10-cp39-cp39-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:5a059afddbaa6dd733b5a2d76a90dbc8af790b993b1b5cb97a1176ca713b5df8"},
{file = "orjson-3.10.10-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f9b5c59f7e2a1a410f971c5ebc68f1995822837cd10905ee255f96074537ee6"},
{file = "orjson-3.10.10-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d5ef198bafdef4aa9d49a4165ba53ffdc0a9e1c7b6f76178572ab33118afea25"},
{file = "orjson-3.10.10-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aaf29ce0bb5d3320824ec3d1508652421000ba466abd63bdd52c64bcce9eb1fa"},
{file = "orjson-3.10.10-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dddd5516bcc93e723d029c1633ae79c4417477b4f57dad9bfeeb6bc0315e654a"},
{file = "orjson-3.10.10-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a12f2003695b10817f0fa8b8fca982ed7f5761dcb0d93cff4f2f9f6709903fd7"},
{file = "orjson-3.10.10-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:672f9874a8a8fb9bb1b771331d31ba27f57702c8106cdbadad8bda5d10bc1019"},
{file = "orjson-3.10.10-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:1dcbb0ca5fafb2b378b2c74419480ab2486326974826bbf6588f4dc62137570a"},
{file = "orjson-3.10.10-cp39-none-win32.whl", hash = "sha256:d9bbd3a4b92256875cb058c3381b782649b9a3c68a4aa9a2fff020c2f9cfc1be"},
{file = "orjson-3.10.10-cp39-none-win_amd64.whl", hash = "sha256:766f21487a53aee8524b97ca9582d5c6541b03ab6210fbaf10142ae2f3ced2aa"},
{file = "orjson-3.10.10.tar.gz", hash = "sha256:37949383c4df7b4337ce82ee35b6d7471e55195efa7dcb45ab8226ceadb0fe3b"},
]
[[package]]
name = "packaging"
version = "24.1"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
files = [
{file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
{file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
]
[[package]]
name = "peewee"
version = "3.17.7"
@@ -697,17 +733,6 @@ tzdata = ">=2020.1"
[package.extras]
test = ["time-machine (>=2.6.0)"]
[[package]]
name = "priority"
version = "2.0.0"
description = "A pure-Python implementation of the HTTP/2 priority tree"
optional = false
python-versions = ">=3.6.1"
files = [
{file = "priority-2.0.0-py3-none-any.whl", hash = "sha256:6f8eefce5f3ad59baf2c080a664037bb4725cd0a790d53d59ab4059288faf6aa"},
{file = "priority-2.0.0.tar.gz", hash = "sha256:c965d54f1b8d0d0b19479db3924c7c36cf672dbf2aec92d43fbdaf4492ba18c0"},
]
[[package]]
name = "prompt-toolkit"
version = "3.0.36"
@@ -1030,6 +1055,23 @@ anyio = ">=3.4.0,<5"
[package.extras]
full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart", "pyyaml"]
[[package]]
name = "structlog"
version = "24.4.0"
description = "Structured Logging for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "structlog-24.4.0-py3-none-any.whl", hash = "sha256:597f61e80a91cc0749a9fd2a098ed76715a1c8a01f73e336b746504d1aad7610"},
{file = "structlog-24.4.0.tar.gz", hash = "sha256:b27bfecede327a6d2da5fbc96bd859f114ecc398a6389d664f62085ee7ae6fc4"},
]
[package.extras]
dev = ["freezegun (>=0.2.8)", "mypy (>=1.4)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "rich", "simplejson", "twisted"]
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-mermaid", "sphinxext-opengraph", "twisted"]
tests = ["freezegun (>=0.2.8)", "pretend", "pytest (>=6.0)", "pytest-asyncio (>=0.17)", "simplejson"]
typing = ["mypy (>=1.4)", "rich", "twisted"]
[[package]]
name = "types-peewee"
version = "3.17.7.20241017"
@@ -1052,6 +1094,17 @@ files = [
{file = "types_psycopg2-2.9.21.20241019-py3-none-any.whl", hash = "sha256:44d091e67732d16a941baae48cd7b53bf91911bc36888652447cf1ef0c1fb3f6"},
]
[[package]]
name = "types-pytz"
version = "2024.2.0.20241003"
description = "Typing stubs for pytz"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-pytz-2024.2.0.20241003.tar.gz", hash = "sha256:575dc38f385a922a212bac00a7d6d2e16e141132a3c955078f4a4fd13ed6cb44"},
{file = "types_pytz-2024.2.0.20241003-py3-none-any.whl", hash = "sha256:3e22df1336c0c6ad1d29163c8fda82736909eb977281cb823c57f8bae07118b7"},
]
[[package]]
name = "typing-extensions"
version = "4.12.2"
@@ -1137,21 +1190,7 @@ files = [
{file = "wcwidth-0.2.13.tar.gz", hash = "sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5"},
]
[[package]]
name = "wsproto"
version = "1.2.0"
description = "WebSockets state-machine based protocol implementation"
optional = false
python-versions = ">=3.7.0"
files = [
{file = "wsproto-1.2.0-py3-none-any.whl", hash = "sha256:b9acddd652b585d75b20477888c56642fdade28bdfd3579aa24a4d2c037dd736"},
{file = "wsproto-1.2.0.tar.gz", hash = "sha256:ad565f26ecb92588a3e43bc3d96164de84cd9902482b130d0ddbaa9664a85065"},
]
[package.dependencies]
h11 = ">=0.9.0,<1"
[metadata]
lock-version = "2.0"
python-versions = "^3.12"
content-hash = "e69fd1560f0fe7e4c5a4c64918fb7c9dab13a3f76a37b92756d12c06c40a466e"
content-hash = "674210864455c4a103c7e78f9879c0360fcdc0ae62d36a2fe44f1df4f59f04e6"

backend/pyproject.toml

@@ -1,6 +1,6 @@
[tool.poetry]
name = "linkpulse"
version = "0.1.0"
version = "0.2.2"
description = ""
authors = ["Xevion <xevion@xevion.dev>"]
license = "GNU GPL v3"
@@ -13,7 +13,6 @@ app = "linkpulse"
[tool.poetry.dependencies]
python = "^3.12"
fastapi = "0.100"
Hypercorn = "0.14.4"
python-dotenv = "^1.0.1"
peewee = "^3.17.7"
peewee-migrate = "^1.13.0"
@@ -24,11 +23,16 @@ questionary = "^2.0.1"
apscheduler = "^3.10.4"
human-readable = "^1.3.4"
psycopg2 = "^2.9.10"
structlog = "^24.4.0"
uvicorn = "^0.32.0"
asgi-correlation-id = "^4.3.4"
orjson = "^3.10.10"
[tool.poetry.group.dev.dependencies]
memory-profiler = "^0.61.0"
bpython = "^0.24"
types-pytz = "^2024.2.0.20241003"
[build-system]
requires = ["poetry-core"]

backend/railway.json

@@ -4,6 +4,6 @@
"builder": "NIXPACKS"
},
"deploy": {
"startCommand": "hypercorn linkpulse.app:app --bind \"[::]:$PORT\""
"startCommand": "python3 -m linkpulse serve"
}
}

backend/run.sh

@@ -1,3 +1,62 @@
#!/usr/bin/env bash
poetry run hypercorn linkpulse.app:app --reload
# Check whether CWD is 'backend'
if [ "$(basename "$(pwd)")" != "backend" ]; then
echo "error: This script must be run from the 'backend' directory."
exit 1
fi
# Default to development mode if not defined
export ENVIRONMENT=${ENVIRONMENT:-development}
COMMAND='poetry run python3 -m linkpulse'
# Check if Railway CLI is available
RAILWAY_AVAILABLE=false
if command -v railway &>/dev/null; then
RAILWAY_AVAILABLE=true
fi
# Check if .env file exists
ENV_FILE_EXISTS=false
if [ -f .env ]; then
ENV_FILE_EXISTS=true
fi
# Check if DATABASE_URL is defined
DATABASE_DEFINED=false
if [ -n "$DATABASE_URL" ]; then
DATABASE_DEFINED=true
else
if $ENV_FILE_EXISTS; then
if grep -E '^DATABASE_URL=.+' .env &>/dev/null; then
DATABASE_DEFINED=true
fi
fi
fi
# Check if Railway project is linked
PROJECT_LINKED=false
if $RAILWAY_AVAILABLE; then
if railway status &>/dev/null; then
PROJECT_LINKED=true
fi
fi
if $DATABASE_DEFINED; then
$COMMAND $@
else
if $RAILWAY_AVAILABLE; then
if $PROJECT_LINKED; then
DATABASE_URL="$(railway variables -s Postgres --json | jq .DATABASE_PUBLIC_URL -cMr)" $COMMAND $@
else
echo "error: Railway project not linked."
echo "Run 'railway link' to link the project."
exit 1
fi
else
echo "error: Could not find DATABASE_URL environment variable."
echo "Install the Railway CLI and link the project, or create a .env file with a DATABASE_URL variable."
exit 1
fi
fi

frontend/.env.example Normal file

@@ -0,0 +1 @@
VITE_BACKEND_TARGET=

frontend/.tool-versions

@@ -1 +0,0 @@
nodejs 22.9.0

frontend/run.sh

@@ -1,3 +1,10 @@
#!/usr/bin/env bash
# Check whether CWD is 'frontend'
if [ "$(basename "$(pwd)")" != "frontend" ]; then
echo "error: This script must be run from the 'frontend' directory."
exit 1
fi
export VITE_BACKEND_TARGET=${VITE_BACKEND_TARGET:-localhost:8000}
pnpm run dev

frontend/src/App.tsx

@@ -66,17 +66,22 @@ export default function App() {
<div className="relative overflow-x-auto">
<table className="w-full text-left text-sm text-gray-500 rtl:text-right dark:text-gray-300">
<tbody>
{seenIps.map((ip) => (
<tr key={ip.ip} className="border-b last:border-0 bg-white dark:border-neutral-700 dark:bg-neutral-800">
<td className="py-4">
<Code>{ip.ip}</Code>
</td>
<td className="py-4">
{ip.count} time{ip.count > 1 ? 's' : ''}
</td>
<td className="py-4">{ip.last_seen}</td>
</tr>
))}
{error == null
? seenIps.map((ip) => (
<tr
key={ip.ip}
className="border-b bg-white last:border-0 dark:border-neutral-700 dark:bg-neutral-800"
>
<td className="py-4">
<Code>{ip.ip}</Code>
</td>
<td className="py-4">
{ip.count} time{ip.count > 1 ? 's' : ''}
</td>
<td className="py-4">{ip.last_seen}</td>
</tr>
))
: null}
</tbody>
</table>
</div>