diff --git a/README.md b/README.md
deleted file mode 100644
index 5c8d4c5..0000000
--- a/README.md
+++ /dev/null
@@ -1,389 +0,0 @@
-# RedFlag
-
-> **🚨 BREAKING CHANGES IN v0.1.23 - READ THIS FIRST**
->
-> **⚠️ ALPHA SOFTWARE - NOT READY FOR PRODUCTION**
->
-> This is experimental software in active development. Features may be broken, bugs are expected, and breaking changes happen frequently. Use at your own risk, preferably on test systems only. Seriously, don't put this in production yet.
-
-**Self-hosted update management for homelabs**
-
-Cross-platform agents • Web dashboard • Single binary deployment • No enterprise BS
-No macOS yet - need real hardware, not Hackintosh hopes and prayers
-
-```
-v0.1.23 - BREAKING CHANGES RELEASE
-```
-
-**Latest:** Complete rearchitecture with security hardening, multi-subsystem support, and real metrics. **This is NOT a simple update** - see [Breaking Changes](#breaking-changes) below.
-
----
-
-## What It Does
-
-RedFlag lets you manage software updates across all your servers from one dashboard. Track pending updates, approve installs, and monitor system health without SSHing into every machine.
-
-**Supported Platforms:**
-- Linux (APT, DNF, Docker)
-- Windows (Windows Update, Winget)
-- Future: Proxmox integration planned
-
-**Built With:**
-- Go backend + PostgreSQL
-- React dashboard
-- Pull-based agents (firewall-friendly)
-- JWT auth with refresh tokens
-
----
-
-## Screenshots
-
-| Dashboard | Agent Details | Update Management |
-|-----------|---------------|-------------------|
-|  |  |  |
-
-| Live Operations | History Tracking | Docker Integration |
-|-----------------|------------------|-------------------|
-|  |  |  |
-
-
-More Screenshots (click to expand)
-
-| Heartbeat System | Registration Tokens | Settings Page |
-|------------------|---------------------|---------------|
-|  |  |  |
-
-| Linux Update Details | Linux Health Details | Agent List |
-|---------------------|----------------------|------------|
-|  |  |  |
-
-| Linux Update History | Windows Agent Details | Windows Update History |
-|---------------------|----------------------|------------------------|
-|  |  |  |
-
-
-
----
-
-## 🚨 Breaking Changes & Automatic Migration (v0.1.23)
-
-**THIS IS NOT A SIMPLE UPDATE** - This version introduces a complete rearchitecture from a monolithic to a multi-subsystem security architecture. However, we've built a comprehensive migration system to handle the upgrade for you.
-
-### **What Changed**
-- **Security**: Machine binding enforcement (v0.1.22+ minimum), Ed25519 signing required.
-- **Architecture**: Single scan → Multi-subsystem (storage, system, docker, packages).
-- **Paths**: The agent now uses `/etc/redflag/` and `/var/lib/redflag/`. The migration system will move your old files from `/etc/aggregator/` and `/var/lib/aggregator/`.
-- **Database**: The server now uses separate tables for metrics, docker images, and storage metrics.
-- **UI**: New approval/reject workflow, real security metrics, and a frosted glass design.
-
-### **Automatic Migration**
-The agent now includes an automatic migration system that will run on the first start after the upgrade. Here's how it works:
-
-1. **Detection**: The agent will detect your old installation (`/etc/aggregator`, old config version).
-2. **Backup**: It will create a timestamped backup of your old configuration and state in `/etc/redflag.backup.{timestamp}/`.
-3. **Migration**: It will move your files to the new paths (`/etc/redflag/`, `/var/lib/redflag/`), update your configuration file to the latest version, and enable the new security features.
-4. **Validation**: The agent will validate the migration and then start normally.
-
-**What you need to do:**
-
-- **Run the agent with elevated privileges (sudo) for the first run after the upgrade.** The migration process needs root access to move files and create backups in `/etc/`.
-- That's it. The agent will handle the rest.
-
-### **Manual Intervention (Only if something goes wrong)**
-If the automatic migration fails, you can find a backup of your old configuration in `/etc/redflag.backup.{timestamp}/`. You can then manually restore your old setup and report the issue.
-
-**Need Migration Help?**
-If you run into any issues with the automatic migration, join our Discord server and ask for help.
-
----
-
-## Quick Start
-
-### Server Deployment (Docker)
-
-```bash
-# Clone and configure
-git clone https://github.com/Fimeg/RedFlag.git
-cd RedFlag
-cp config/.env.bootstrap.example config/.env
-docker-compose build
-docker-compose up -d
-
-# Access web UI and run setup
-open http://localhost:3000
-# Follow setup wizard to:
-# - Generate Ed25519 signing keys (CRITICAL for agent updates)
-# - Configure database and admin settings
-# - Copy generated .env content to config/.env
-
-# Restart server to use new configuration and signing keys
-docker-compose down
-docker-compose up -d
-```
-
----
-
-### Agent Installation
-
-**Linux (one-liner):**
-```bash
-curl -sfL https://your-server.com/install | sudo bash -s -- your-registration-token
-```
-
-**Windows (PowerShell):**
-```powershell
-iwr https://your-server.com/install.ps1 | iex
-```
-
-**Manual installation:**
-```bash
-# Download agent binary
-wget https://your-server.com/download/linux/amd64/redflag-agent
-
-# Register and install
-chmod +x redflag-agent
-sudo ./redflag-agent --server https://your-server.com --token your-token --register
-```
-
-Get registration tokens from the web dashboard under **Settings → Token Management**.
-
----
-
-### Updating
-
-To update to the latest version:
-
-```bash
-git pull && docker-compose down && docker-compose build --no-cache && docker-compose up -d
-```
-
----
-
-
-Full Reinstall (Nuclear Option)
-
-If things get really broken or you want to start completely fresh:
-
-```bash
-docker-compose down -v --remove-orphans && \
- rm config/.env && \
- docker-compose build --no-cache && \
- cp config/.env.bootstrap.example config/.env && \
- docker-compose up -d
-```
-
-**What this does:**
-- `down -v` - Stops containers and **wipes all data** (including the database)
-- `--remove-orphans` - Cleans up leftover containers
-- `rm config/.env` - Removes old server config
-- `build --no-cache` - Rebuilds images from scratch
-- `cp config/.env.bootstrap.example` - Resets to bootstrap mode for setup wizard
-- `up -d` - Starts fresh in background
-
-**Warning:** This deletes everything - all agents, update history, configurations. You'll need to handle existing agents:
-
-**Option 1 - Re-register agents:**
-- Remove ALL agent config:
- - `sudo rm /etc/aggregator/config.json` (old path)
- - `sudo rm -rf /etc/redflag/` (new path)
- - `sudo rm -rf /var/lib/aggregator/` (old state)
- - `sudo rm -rf /var/lib/redflag/` (new state)
- - `C:\ProgramData\RedFlag\config.json` (Windows)
-- Re-run the one-liner installer with new registration token
-- Scripts handle override/update automatically (one agent per OS install)
-
-**Option 2 - Clean uninstall/reinstall:**
-- Uninstall agent completely first
-- Then run installer with new token
-
-
-
----
-
-
-Full Uninstall
-
-**Uninstall Server:**
-```bash
-docker-compose down -v --remove-orphans
-rm config/.env
-```
-
-**Uninstall Linux Agent:**
-```bash
-# Using uninstall script (recommended)
-sudo bash aggregator-agent/uninstall.sh
-
-# Remove ALL agent configuration (old and new paths)
-sudo rm /etc/aggregator/config.json
-sudo rm -rf /etc/redflag/
-sudo rm -rf /var/lib/aggregator/
-sudo rm -rf /var/lib/redflag/
-
-# Remove agent user (optional - preserves logs)
-sudo userdel -r redflag-agent
-```
-
-**Uninstall Windows Agent:**
-```powershell
-# Stop and remove service
-Stop-Service RedFlagAgent
-sc.exe delete RedFlagAgent
-
-# Remove files
-Remove-Item "C:\Program Files\RedFlag\redflag-agent.exe"
-Remove-Item "C:\ProgramData\RedFlag\config.json"
-```
-
-
-
----
-
-## Key Features
-
-✅ **Secure by Default** - Registration tokens, JWT auth, rate limiting
-✅ **Idempotent Installs** - Re-running installers won't create duplicate agents
-✅ **Real-time Heartbeat** - Interactive operations with rapid polling
-✅ **Dependency Handling** - Dry-run checks before installing updates
-✅ **Multi-seat Tokens** - One token can register multiple agents
-✅ **Audit Trails** - Complete history of all operations
-✅ **Proxy Support** - HTTP/HTTPS/SOCKS5 for restricted networks
-✅ **Native Services** - systemd on Linux, Windows Services on Windows
-✅ **Ed25519 Signing** - Cryptographic signatures for agent updates (v0.1.22+)
-✅ **Machine Binding** - Hardware fingerprint enforcement prevents agent spoofing
-✅ **Real Security Metrics** - Actual database-driven security monitoring
-
----
-
-## Architecture
-
-```
-┌─────────────────┐
-│  Web Dashboard  │  React + TypeScript
-│   Port: 3000    │
-└────────┬────────┘
-         │ HTTPS + JWT Auth
-┌────────▼────────┐
-│   Server (Go)   │  PostgreSQL
-│   Port: 8080    │
-└────────┬────────┘
-         │ Pull-based (agents check in every 5 min)
-    ┌────┴────┬─────────┐
-    │         │         │
-┌───▼───┐ ┌───▼───┐ ┌───▼───┐
-│ Linux │ │Windows│ │ Linux │
-│ Agent │ │ Agent │ │ Agent │
-└───────┘ └───────┘ └───────┘
-```
-
----
-
-## Documentation
-
-- **[API Reference](docs/API.md)** - Complete API documentation
-- **[Configuration](docs/CONFIGURATION.md)** - CLI flags, env vars, config files
-- **[Architecture](docs/ARCHITECTURE.md)** - System design and database schema
-- **[Development](docs/DEVELOPMENT.md)** - Build from source, testing, contributing
-
----
-
-## Security Notes
-
-RedFlag uses:
-- **Registration tokens** - One-time use tokens for secure agent enrollment
-- **Refresh tokens** - 90-day sliding window, auto-renewal for active agents
-- **SHA-256 hashing** - All tokens hashed at rest
-- **Rate limiting** - Configurable API protection
-- **Minimal privileges** - Agents run with least required permissions
-- **Ed25519 Signing** - All agent updates signed with server keys (v0.1.22+)
-- **Machine Binding** - Agents bound to hardware fingerprint (v0.1.22+)
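
"All tokens hashed at rest" means the server stores only a digest, so a database leak does not expose usable credentials. A minimal sketch of SHA-256 token hashing (the `hashToken` name is illustrative, not RedFlag's actual function):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashToken returns the hex-encoded SHA-256 digest that would be stored
// in place of the raw registration or refresh token.
// (Hypothetical helper for illustration only.)
func hashToken(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

func main() {
	// On lookup, the server hashes the presented token and compares digests;
	// the plaintext token never needs to be stored.
	fmt.Println(hashToken("abc"))
}
```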
-
-**File Flow & Update Security:**
-- All agent update packages are cryptographically signed
-- Setup wizard generates Ed25519 keypair during initial configuration
-- Agents validate signatures before installing any updates
-- File integrity verified with checksums and signatures
-- Controlled file flow prevents unauthorized updates
-
-For production deployments:
-1. Complete setup wizard to generate signing keys
-2. Use HTTPS/TLS
-3. Configure firewall rules
-4. Enable rate limiting
-5. Monitor security metrics dashboard
-
----
-
-## Current Status
-
-**What Works:**
-- ✅ Cross-platform agent registration and updates
-- ✅ Update scanning for all supported package managers
-- ✅ Dry-run dependency checking before installation
-- ✅ Real-time heartbeat and rapid polling
-- ✅ Multi-seat registration tokens
-- ✅ Native service integration (systemd, Windows Services)
-- ✅ Web dashboard with full agent management
-- ✅ Docker integration for container image updates
-
-**Known Issues:**
-- Windows Winget detection needs debugging
-- Some Windows Updates may reappear after installation (known Windows Update quirk)
-
-**Planned Features:**
-- Proxmox VM/container integration
-- Agent auto-update system
-- Mobile-responsive dashboard improvements
-
----
-
-## Development
-
-```bash
-# Start local development environment
-make db-up
-make server # Terminal 1
-make agent # Terminal 2
-make web # Terminal 3
-```
-
-See [docs/DEVELOPMENT.md](docs/DEVELOPMENT.md) for detailed build instructions.
-
----
-
-## Alpha Release Notice
-
-This is alpha software built for homelabs and self-hosters. It's functional and actively used, but:
-
-- Expect occasional bugs
-- Backup your data
-- Security model is solid but not audited
-- Breaking changes may happen between versions
-- Documentation is a work in progress
-
-That said, it works well for its intended use case. Issues and feedback welcome!
-
----
-
-## License
-
-MIT License - See [LICENSE](LICENSE) for details
-
-**Third-Party Components:**
-- Windows Update integration based on [windowsupdate](https://github.com/ceshihao/windowsupdate) (Apache 2.0)
-
----
-
-## Project Goals
-
-RedFlag aims to be:
-- **Simple** - Deploy in 5 minutes, understand in 10
-- **Honest** - No enterprise marketing speak, just useful software
-- **Homelab-first** - Built for real use cases, not investor pitches
-- **Self-hosted** - Your data, your infrastructure
-
-If you're looking for an enterprise-grade solution with SLAs and support contracts, this isn't it. If you want to manage updates across your homelab without SSH-ing into every server, welcome aboard.
-
----
-
-**Made with ❤️ for homelabbers, by homelabbers**
diff --git a/THIRD_PARTY_LICENSES.md b/THIRD_PARTY_LICENSES.md
deleted file mode 100644
index ae7c094..0000000
--- a/THIRD_PARTY_LICENSES.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Third-Party Licenses
-
-This document lists the third-party components and their licenses that are included in or required by RedFlag.
-
-## Windows Update Package (Apache 2.0)
-
-**Package**: `github.com/ceshihao/windowsupdate`
-**Version**: Included as vendored code in `aggregator-agent/pkg/windowsupdate/`
-**License**: Apache License 2.0
-**Copyright**: Copyright 2022 Zheng Dayu
-**Source**: https://github.com/ceshihao/windowsupdate
-**License File**: https://github.com/ceshihao/windowsupdate/blob/main/LICENSE
-
-### License Text
-
-```
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-```
-
-### Modifications
-
-The package has been modified for integration with RedFlag's update management system. Modifications include:
-
-- Integration with RedFlag's update reporting format
-- Added support for RedFlag's metadata structures
-- Compatibility with RedFlag's agent communication protocol
-
-All modifications maintain the original Apache 2.0 license.
-
----
-
-## License Compatibility
-
-RedFlag is licensed under the MIT License, which is compatible with the Apache License 2.0. Both are permissive open-source licenses that allow:
-
-- Commercial use
-- Modification
-- Distribution
-- Private use
-
-The MIT license requires preservation of copyright notices, which is fulfilled through this attribution.
\ No newline at end of file
diff --git a/aggregator-agent/agent b/aggregator-agent/agent
index df69cce..dfa4ef7 100755
Binary files a/aggregator-agent/agent and b/aggregator-agent/agent differ
diff --git a/aggregator-agent/go.mod b/aggregator-agent/go.mod
index 9a76862..f2a5f96 100644
--- a/aggregator-agent/go.mod
+++ b/aggregator-agent/go.mod
@@ -12,7 +12,6 @@ require (
)
require (
- github.com/Fimeg/RedFlag/aggregator v0.0.0
github.com/Microsoft/go-winio v0.4.21 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
@@ -37,5 +36,3 @@ require (
golang.org/x/time v0.5.0 // indirect
gotest.tools/v3 v3.5.2 // indirect
)
-
-replace github.com/Fimeg/RedFlag/aggregator => ../aggregator
diff --git a/aggregator-agent/internal/cache/local.go b/aggregator-agent/internal/cache/local.go
index f413a7d..fb7a14f 100644
--- a/aggregator-agent/internal/cache/local.go
+++ b/aggregator-agent/internal/cache/local.go
@@ -13,17 +13,17 @@ import (
// LocalCache stores scan results locally for offline viewing
type LocalCache struct {
- LastScanTime time.Time `json:"last_scan_time"`
- LastCheckIn time.Time `json:"last_check_in"`
- AgentID uuid.UUID `json:"agent_id"`
- ServerURL string `json:"server_url"`
- UpdateCount int `json:"update_count"`
- Updates []client.UpdateReportItem `json:"updates"`
- AgentStatus string `json:"agent_status"`
+ LastScanTime time.Time `json:"last_scan_time"`
+ LastCheckIn time.Time `json:"last_check_in"`
+ AgentID uuid.UUID `json:"agent_id"`
+ ServerURL string `json:"server_url"`
+ UpdateCount int `json:"update_count"`
+ Updates []client.UpdateReportItem `json:"updates"`
+ AgentStatus string `json:"agent_status"`
}
// CacheDir is the directory where local cache is stored
-const CacheDir = "/var/lib/redflag"
+const CacheDir = "/var/lib/redflag-agent"
// CacheFile is the file where scan results are cached
const CacheFile = "last_scan.json"
@@ -126,4 +126,4 @@ func (c *LocalCache) Clear() {
c.UpdateCount = 0
c.Updates = []client.UpdateReportItem{}
c.AgentStatus = ""
-}
\ No newline at end of file
+}
diff --git a/aggregator-agent/internal/client/client.go b/aggregator-agent/internal/client/client.go
index 43bbe74..e6e34ca 100644
--- a/aggregator-agent/internal/client/client.go
+++ b/aggregator-agent/internal/client/client.go
@@ -7,10 +7,13 @@ import (
"io"
"net/http"
"os"
+ "path/filepath"
"runtime"
"strings"
"time"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/event"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/models"
"github.com/Fimeg/RedFlag/aggregator-agent/internal/system"
"github.com/google/uuid"
)
@@ -23,6 +26,8 @@ type Client struct {
RapidPollingEnabled bool
RapidPollingUntil time.Time
machineID string // Cached machine ID for security binding
+ eventBuffer *event.Buffer
+ agentID uuid.UUID
}
// NewClient creates a new API client
@@ -45,6 +50,58 @@ func NewClient(baseURL, token string) *Client {
}
}
+// NewClientWithEventBuffer creates a new API client with event buffering capability
+func NewClientWithEventBuffer(baseURL, token string, statePath string, agentID uuid.UUID) *Client {
+ client := NewClient(baseURL, token)
+ client.agentID = agentID
+
+ // Initialize event buffer if state path is provided
+ if statePath != "" {
+ eventBufferPath := filepath.Join(statePath, "events_buffer.json")
+ client.eventBuffer = event.NewBuffer(eventBufferPath)
+ }
+
+ return client
+}
+
+// bufferEvent buffers a system event for later reporting
+func (c *Client) bufferEvent(eventType, eventSubtype, severity, component, message string, metadata map[string]interface{}) {
+ if c.eventBuffer == nil {
+ return // Event buffering not enabled
+ }
+
+ // Use agent ID if available, otherwise create event with nil agent ID
+ var agentIDPtr *uuid.UUID
+ if c.agentID != uuid.Nil {
+ agentIDPtr = &c.agentID
+ }
+
+ event := &models.SystemEvent{
+ ID: uuid.New(),
+ AgentID: agentIDPtr,
+ EventType: eventType,
+ EventSubtype: eventSubtype,
+ Severity: severity,
+ Component: component,
+ Message: message,
+ Metadata: metadata,
+ CreatedAt: time.Now(),
+ }
+
+ // Buffer the event (best effort - don't fail if buffering fails)
+ if err := c.eventBuffer.BufferEvent(event); err != nil {
+ fmt.Printf("Warning: Failed to buffer event: %v\n", err)
+ }
+}
+
+// GetBufferedEvents returns all buffered events and clears the buffer
+func (c *Client) GetBufferedEvents() ([]*models.SystemEvent, error) {
+ if c.eventBuffer == nil {
+ return nil, nil // Event buffering not enabled
+ }
+ return c.eventBuffer.GetBufferedEvents()
+}
+
// addMachineIDHeader adds X-Machine-ID header to authenticated requests (v0.1.22+)
func (c *Client) addMachineIDHeader(req *http.Request) {
if c.machineID != "" {
@@ -95,11 +152,25 @@ func (c *Client) Register(req RegisterRequest) (*RegisterResponse, error) {
body, err := json.Marshal(req)
if err != nil {
+ // Buffer registration failure event
+ c.bufferEvent("registration_failure", "marshal_error", "error", "client",
+ fmt.Sprintf("Failed to marshal registration request: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "hostname": req.Hostname,
+ })
return nil, err
}
httpReq, err := http.NewRequest("POST", url, bytes.NewBuffer(body))
if err != nil {
+ // Buffer registration failure event
+ c.bufferEvent("registration_failure", "request_creation_error", "error", "client",
+ fmt.Sprintf("Failed to create registration request: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "hostname": req.Hostname,
+ })
return nil, err
}
httpReq.Header.Set("Content-Type", "application/json")
@@ -112,22 +183,49 @@ func (c *Client) Register(req RegisterRequest) (*RegisterResponse, error) {
resp, err := c.http.Do(httpReq)
if err != nil {
+ // Buffer registration failure event
+ c.bufferEvent("registration_failure", "network_error", "error", "client",
+ fmt.Sprintf("Registration request failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "hostname": req.Hostname,
+ "server_url": c.baseURL,
+ })
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
- return nil, fmt.Errorf("registration failed: %s - %s", resp.Status, string(bodyBytes))
+ errorMsg := fmt.Sprintf("registration failed: %s - %s", resp.Status, string(bodyBytes))
+
+ // Buffer registration failure event
+ c.bufferEvent("registration_failure", "api_error", "error", "client",
+ errorMsg,
+ map[string]interface{}{
+ "status_code": resp.StatusCode,
+ "response_body": string(bodyBytes),
+ "hostname": req.Hostname,
+ "server_url": c.baseURL,
+ })
+ return nil, fmt.Errorf("%s", errorMsg)
}
var result RegisterResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
+ // Buffer registration failure event
+ c.bufferEvent("registration_failure", "decode_error", "error", "client",
+ fmt.Sprintf("Failed to decode registration response: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "hostname": req.Hostname,
+ })
return nil, err
}
- // Update client token
+ // Update client token and agent ID
c.token = result.Token
+ c.agentID = result.AgentID
return &result, nil
}
@@ -136,6 +234,7 @@ func (c *Client) Register(req RegisterRequest) (*RegisterResponse, error) {
type TokenRenewalRequest struct {
AgentID uuid.UUID `json:"agent_id"`
RefreshToken string `json:"refresh_token"`
+ AgentVersion string `json:"agent_version,omitempty"` // Agent's current version for upgrade tracking
}
// TokenRenewalResponse is returned after successful token renewal
@@ -144,38 +243,79 @@ type TokenRenewalResponse struct {
}
// RenewToken uses refresh token to get a new access token (proper implementation)
-func (c *Client) RenewToken(agentID uuid.UUID, refreshToken string) error {
+func (c *Client) RenewToken(agentID uuid.UUID, refreshToken string, agentVersion string) error {
url := fmt.Sprintf("%s/api/v1/agents/renew", c.baseURL)
renewalReq := TokenRenewalRequest{
AgentID: agentID,
RefreshToken: refreshToken,
+ AgentVersion: agentVersion,
}
body, err := json.Marshal(renewalReq)
if err != nil {
+ // Buffer token renewal failure event
+ c.bufferEvent("token_renewal_failure", "marshal_error", "error", "client",
+ fmt.Sprintf("Failed to marshal token renewal request: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "agent_id": agentID.String(),
+ })
return err
}
httpReq, err := http.NewRequest("POST", url, bytes.NewBuffer(body))
if err != nil {
+ // Buffer token renewal failure event
+ c.bufferEvent("token_renewal_failure", "request_creation_error", "error", "client",
+ fmt.Sprintf("Failed to create token renewal request: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "agent_id": agentID.String(),
+ })
return err
}
httpReq.Header.Set("Content-Type", "application/json")
resp, err := c.http.Do(httpReq)
if err != nil {
+ // Buffer token renewal failure event
+ c.bufferEvent("token_renewal_failure", "network_error", "error", "client",
+ fmt.Sprintf("Token renewal request failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "agent_id": agentID.String(),
+ "server_url": c.baseURL,
+ })
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
- return fmt.Errorf("token renewal failed: %s - %s", resp.Status, string(bodyBytes))
+ errorMsg := fmt.Sprintf("token renewal failed: %s - %s", resp.Status, string(bodyBytes))
+
+ // Buffer token renewal failure event
+ c.bufferEvent("token_renewal_failure", "api_error", "error", "client",
+ errorMsg,
+ map[string]interface{}{
+ "status_code": resp.StatusCode,
+ "response_body": string(bodyBytes),
+ "agent_id": agentID.String(),
+ "server_url": c.baseURL,
+ })
+ return fmt.Errorf("%s", errorMsg)
}
var result TokenRenewalResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
+ // Buffer token renewal failure event
+ c.bufferEvent("token_renewal_failure", "decode_error", "error", "client",
+ fmt.Sprintf("Failed to decode token renewal response: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "agent_id": agentID.String(),
+ })
return err
}
@@ -187,11 +327,15 @@ func (c *Client) RenewToken(agentID uuid.UUID, refreshToken string) error {
// Command represents a command from the server
type Command struct {
- ID string `json:"id"`
- Type string `json:"type"`
- Params map[string]interface{} `json:"params"`
+ ID string `json:"id"`
+ Type string `json:"type"`
+ Params map[string]interface{} `json:"params"`
+ Signature string `json:"signature,omitempty"` // Ed25519 signature of the command
}
+// CommandItem is an alias for Command for consistency with server models
+type CommandItem = Command
+
// CommandsResponse contains pending commands
type CommandsResponse struct {
Commands []Command `json:"commands"`
diff --git a/aggregator/pkg/common/agentfile.go b/aggregator-agent/internal/common/agentfile.go
similarity index 100%
rename from aggregator/pkg/common/agentfile.go
rename to aggregator-agent/internal/common/agentfile.go
diff --git a/aggregator-agent/internal/config/config.go b/aggregator-agent/internal/config/config.go
index adadd8b..3a9ad61 100644
--- a/aggregator-agent/internal/config/config.go
+++ b/aggregator-agent/internal/config/config.go
@@ -5,11 +5,24 @@ import (
"fmt"
"os"
"path/filepath"
+ "strings"
"time"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/version"
"github.com/google/uuid"
)
+// MigrationState tracks migration completion status (used by migration package)
+type MigrationState struct {
+ LastCompleted map[string]time.Time `json:"last_completed"`
+ AgentVersion string `json:"agent_version"`
+ ConfigVersion string `json:"config_version"`
+ Timestamp time.Time `json:"timestamp"`
+ Success bool `json:"success"`
+ RollbackPath string `json:"rollback_path,omitempty"`
+ CompletedMigrations []string `json:"completed_migrations"`
+}
+
// ProxyConfig holds proxy configuration
type ProxyConfig struct {
Enabled bool `json:"enabled"`
@@ -45,6 +58,24 @@ type LoggingConfig struct {
MaxAge int `json:"max_age"` // Max age of log files in days
}
+// SecurityLogConfig holds configuration for security logging
+type SecurityLogConfig struct {
+ Enabled bool `json:"enabled" env:"REDFLAG_AGENT_SECURITY_LOG_ENABLED" default:"true"`
+ Level string `json:"level" env:"REDFLAG_AGENT_SECURITY_LOG_LEVEL" default:"warning"` // none, error, warn, info, debug
+ LogSuccesses bool `json:"log_successes" env:"REDFLAG_AGENT_SECURITY_LOG_SUCCESSES" default:"false"`
+ FilePath string `json:"file_path" env:"REDFLAG_AGENT_SECURITY_LOG_PATH"` // Relative to agent data directory
+ MaxSizeMB int `json:"max_size_mb" env:"REDFLAG_AGENT_SECURITY_LOG_MAX_SIZE" default:"50"`
+ MaxFiles int `json:"max_files" env:"REDFLAG_AGENT_SECURITY_LOG_MAX_FILES" default:"5"`
+ BatchSize int `json:"batch_size" env:"REDFLAG_AGENT_SECURITY_LOG_BATCH_SIZE" default:"10"`
+ SendToServer bool `json:"send_to_server" env:"REDFLAG_AGENT_SECURITY_LOG_SEND" default:"true"`
+}
+
+// CommandSigningConfig holds configuration for command signature verification
+type CommandSigningConfig struct {
+ Enabled bool `json:"enabled" env:"REDFLAG_AGENT_COMMAND_SIGNING_ENABLED" default:"true"`
+ EnforcementMode string `json:"enforcement_mode" env:"REDFLAG_AGENT_COMMAND_ENFORCEMENT_MODE" default:"strict"` // strict, warning, disabled
+}
+
// Config holds agent configuration
type Config struct {
// Version Information
@@ -79,6 +110,12 @@ type Config struct {
// Logging Configuration
Logging LoggingConfig `json:"logging,omitempty"`
+ // Security Logging Configuration
+ SecurityLogging SecurityLogConfig `json:"security_logging,omitempty"`
+
+ // Command Signing Configuration
+ CommandSigning CommandSigningConfig `json:"command_signing,omitempty"`
+
// Agent Metadata
Tags []string `json:"tags,omitempty"` // User-defined tags
Metadata map[string]string `json:"metadata,omitempty"` // Custom metadata
@@ -87,6 +124,9 @@ type Config struct {
// Subsystem Configuration
Subsystems SubsystemsConfig `json:"subsystems,omitempty"` // Scanner subsystem configs
+
+ // Migration State
+ MigrationState *MigrationState `json:"migration_state,omitempty"` // Migration completion tracking
}
// Load reads configuration from multiple sources with priority order:
@@ -95,12 +135,11 @@ type Config struct {
// 3. Configuration file
// 4. Default values
func Load(configPath string, cliFlags *CLIFlags) (*Config, error) {
- // Start with defaults
- config := getDefaultConfig()
-
- // Load from config file if it exists
- if fileConfig, err := loadFromFile(configPath); err == nil {
- mergeConfig(config, fileConfig)
+ // Load existing config from file first
+ config, err := loadFromFile(configPath)
+ if err != nil {
+ // Only use defaults if file doesn't exist or can't be read
+ config = getDefaultConfig()
}
// Override with environment variables
@@ -134,13 +173,53 @@ type CLIFlags struct {
InsecureTLS bool
}
+// getConfigVersionForAgent extracts the config version from the agent version
+// Agent version format: v0.1.23.6 where the fourth octet (.6) maps to config version
+func getConfigVersionForAgent(agentVersion string) string {
+ // Strip 'v' prefix if present
+ cleanVersion := strings.TrimPrefix(agentVersion, "v")
+
+ // Split version parts
+ parts := strings.Split(cleanVersion, ".")
+ if len(parts) == 4 {
+ // Return the fourth octet as the config version
+ // v0.1.23.6 → "6"
+ return parts[3]
+ }
+
+ // TODO: Integrate with global error logging system when available
+ // For now, default to "6" to match current agent version
+ return "6"
+}
+
// getDefaultConfig returns default configuration values
func getDefaultConfig() *Config {
+ // Use version package for single source of truth
+ configVersion := version.ConfigVersion
+ if configVersion == "dev" {
+ // Fallback to extracting from agent version if not injected
+ configVersion = version.ExtractConfigVersionFromAgent(version.Version)
+ }
+
return &Config{
- Version: "4", // Current config schema version
- AgentVersion: "", // Will be set by the agent at startup
+ Version: configVersion, // Config schema version from version package
+ AgentVersion: version.Version, // Agent version from version package
ServerURL: "http://localhost:8080",
CheckInInterval: 300, // 5 minutes
+
+ // Server Authentication
+ RegistrationToken: "", // One-time registration token (embedded by install script)
+ AgentID: uuid.Nil, // Will be set during registration
+ Token: "", // Will be set during registration
+ RefreshToken: "", // Will be set during registration
+
+ // Agent Behavior
+ RapidPollingEnabled: false,
+ RapidPollingUntil: time.Time{},
+
+ // Network Security
+ Proxy: ProxyConfig{},
+ TLS: TLSConfig{},
Network: NetworkConfig{
Timeout: 30 * time.Second,
RetryCount: 3,
@@ -153,6 +232,20 @@ func getDefaultConfig() *Config {
MaxBackups: 3,
MaxAge: 28, // 28 days
},
+ SecurityLogging: SecurityLogConfig{
+ Enabled: true,
+ Level: "warning",
+ LogSuccesses: false,
+ FilePath: "security.log",
+ MaxSizeMB: 50,
+ MaxFiles: 5,
+ BatchSize: 10,
+ SendToServer: true,
+ },
+ CommandSigning: CommandSigningConfig{
+ Enabled: true,
+ EnforcementMode: "strict",
+ },
Subsystems: GetDefaultSubsystemsConfig(),
Tags: []string{},
Metadata: make(map[string]string),
@@ -171,32 +264,36 @@ func loadFromFile(configPath string) (*Config, error) {
data, err := os.ReadFile(configPath)
if err != nil {
if os.IsNotExist(err) {
- return getDefaultConfig(), nil // Return defaults if file doesn't exist
+ return nil, fmt.Errorf("config file does not exist: %w", err) // Return error so caller falls back to defaults
}
return nil, fmt.Errorf("failed to read config: %w", err)
}
- // Start with latest default config
- config := getDefaultConfig()
-
- // Parse the existing config into a generic map to handle missing fields
+ // Parse the existing config into a generic map to preserve all fields
var rawConfig map[string]interface{}
if err := json.Unmarshal(data, &rawConfig); err != nil {
return nil, fmt.Errorf("failed to parse config: %w", err)
}
- // Marshal back to JSON and unmarshal into our new structure
- // This ensures missing fields get default values from getDefaultConfig()
+ // Create a new config with ALL defaults to fill missing fields
+ config := getDefaultConfig()
+
+ // Carefully merge the loaded config into our defaults
+ // This preserves existing values while filling missing ones with defaults
configJSON, err := json.Marshal(rawConfig)
if err != nil {
return nil, fmt.Errorf("failed to re-marshal config: %w", err)
}
- // Carefully merge into our config structure, preserving defaults for missing fields
- if err := json.Unmarshal(configJSON, &config); err != nil {
- return nil, fmt.Errorf("failed to merge config: %w", err)
+ // Create a temporary config to hold loaded values
+ tempConfig := &Config{}
+ if err := json.Unmarshal(configJSON, &tempConfig); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal temp config: %w", err)
}
+ // Merge loaded config into defaults (only non-zero values)
+ mergeConfigPreservingDefaults(config, tempConfig)
+
// Handle specific migrations for known breaking changes
migrateConfig(config)
@@ -205,10 +302,19 @@ func loadFromFile(configPath string) (*Config, error) {
// migrateConfig handles specific known migrations between config versions
func migrateConfig(cfg *Config) {
+ // Save the registration token before migration
+ savedRegistrationToken := cfg.RegistrationToken
+
// Update config schema version to latest
- if cfg.Version != "5" {
- fmt.Printf("[CONFIG] Migrating config schema from version %s to 5\n", cfg.Version)
- cfg.Version = "5"
+ targetVersion := version.ConfigVersion
+ if targetVersion == "dev" {
+ // Fallback to extracting from agent version
+ targetVersion = version.ExtractConfigVersionFromAgent(version.Version)
+ }
+
+ if cfg.Version != targetVersion {
+ fmt.Printf("[CONFIG] Migrating config schema from version %s to %s\n", cfg.Version, targetVersion)
+ cfg.Version = targetVersion
}
// Migration 1: Ensure minimum check-in interval (30 seconds)
@@ -227,6 +333,12 @@ func migrateConfig(cfg *Config) {
fmt.Printf("[CONFIG] Adding missing 'updates' subsystem configuration\n")
cfg.Subsystems.Updates = GetDefaultSubsystemsConfig().Updates
}
+
+ // CRITICAL: Restore the registration token after migration
+ // This ensures the token is never overwritten by migration logic
+ if savedRegistrationToken != "" {
+ cfg.RegistrationToken = savedRegistrationToken
+ }
}
// loadFromEnv loads configuration from environment variables
@@ -263,6 +375,32 @@ func loadFromEnv() *Config {
config.DisplayName = displayName
}
+ // Security logging environment variables
+ if secEnabled := os.Getenv("REDFLAG_AGENT_SECURITY_LOG_ENABLED"); secEnabled != "" {
+ config.SecurityLogging.Enabled = secEnabled == "true"
+ }
+ if secLevel := os.Getenv("REDFLAG_AGENT_SECURITY_LOG_LEVEL"); secLevel != "" {
+ config.SecurityLogging.Level = secLevel
+ }
+ if secLogSucc := os.Getenv("REDFLAG_AGENT_SECURITY_LOG_SUCCESSES"); secLogSucc != "" {
+ config.SecurityLogging.LogSuccesses = secLogSucc == "true"
+ }
+ if secPath := os.Getenv("REDFLAG_AGENT_SECURITY_LOG_PATH"); secPath != "" {
+ config.SecurityLogging.FilePath = secPath
+ }
+
return config
}
@@ -341,6 +479,12 @@ func mergeConfig(target, source *Config) {
if source.Logging != (LoggingConfig{}) {
target.Logging = source.Logging
}
+ if source.SecurityLogging != (SecurityLogConfig{}) {
+ target.SecurityLogging = source.SecurityLogging
+ }
+ if source.CommandSigning != (CommandSigningConfig{}) {
+ target.CommandSigning = source.CommandSigning
+ }
// Merge metadata
if source.Tags != nil {
@@ -436,3 +580,89 @@ func (c *Config) NeedsRegistration() bool {
func (c *Config) HasRegistrationToken() bool {
return c.RegistrationToken != ""
}
+
+// mergeConfigPreservingDefaults merges the loaded source config into target,
+// overwriting only fields the config file explicitly set (non-zero values).
+// Unlike mergeConfig, it never clobbers an existing registration token and
+// leaves defaults in place for any field the file omitted.
+func mergeConfigPreservingDefaults(target, source *Config) {
+ // Server Configuration
+ if source.ServerURL != "" && source.ServerURL != getDefaultConfig().ServerURL {
+ target.ServerURL = source.ServerURL
+ }
+ // IMPORTANT: Never overwrite registration token if target already has one
+ if source.RegistrationToken != "" && target.RegistrationToken == "" {
+ target.RegistrationToken = source.RegistrationToken
+ }
+
+ // Agent Configuration
+ if source.CheckInInterval != 0 {
+ target.CheckInInterval = source.CheckInInterval
+ }
+ if source.AgentID != uuid.Nil {
+ target.AgentID = source.AgentID
+ }
+ if source.Token != "" {
+ target.Token = source.Token
+ }
+ if source.RefreshToken != "" {
+ target.RefreshToken = source.RefreshToken
+ }
+
+ // Merge nested configs only if they're not default values
+ if source.Network != (NetworkConfig{}) {
+ target.Network = source.Network
+ }
+ if source.Proxy != (ProxyConfig{}) {
+ target.Proxy = source.Proxy
+ }
+ if source.TLS != (TLSConfig{}) {
+ target.TLS = source.TLS
+ }
+ if source.Logging != (LoggingConfig{}) && source.Logging.Level != "" {
+ target.Logging = source.Logging
+ }
+ if source.SecurityLogging != (SecurityLogConfig{}) {
+ target.SecurityLogging = source.SecurityLogging
+ }
+ if source.CommandSigning != (CommandSigningConfig{}) {
+ target.CommandSigning = source.CommandSigning
+ }
+
+ // Merge metadata
+ if len(source.Tags) > 0 {
+ target.Tags = source.Tags
+ }
+ if source.Metadata != nil {
+ if target.Metadata == nil {
+ target.Metadata = make(map[string]string)
+ }
+ for k, v := range source.Metadata {
+ target.Metadata[k] = v
+ }
+ }
+ if source.DisplayName != "" {
+ target.DisplayName = source.DisplayName
+ }
+ if source.Organization != "" {
+ target.Organization = source.Organization
+ }
+
+ // Merge rapid polling settings (a bool has no "unset" state, so copy directly)
+ target.RapidPollingEnabled = source.RapidPollingEnabled
+ if !source.RapidPollingUntil.IsZero() {
+ target.RapidPollingUntil = source.RapidPollingUntil
+ }
+
+ // Merge subsystems config
+ if source.Subsystems != (SubsystemsConfig{}) {
+ target.Subsystems = source.Subsystems
+ }
+
+ // Version info
+ if source.Version != "" {
+ target.Version = source.Version
+ }
+ if source.AgentVersion != "" {
+ target.AgentVersion = source.AgentVersion
+ }
+}
diff --git a/aggregator-agent/internal/config/subsystems.go b/aggregator-agent/internal/config/subsystems.go
index d856a8c..8e44b00 100644
--- a/aggregator-agent/internal/config/subsystems.go
+++ b/aggregator-agent/internal/config/subsystems.go
@@ -7,6 +7,10 @@ type SubsystemConfig struct {
// Execution settings
Enabled bool `json:"enabled"`
Timeout time.Duration `json:"timeout"` // Timeout for this subsystem
+
+ // Interval for this subsystem (in minutes)
+ // This controls how often the server schedules scans for this subsystem
+ IntervalMinutes int `json:"interval_minutes,omitempty"`
// Circuit breaker settings
CircuitBreaker CircuitBreakerConfig `json:"circuit_breaker"`
@@ -64,44 +68,52 @@ func GetDefaultSubsystemsConfig() SubsystemsConfig {
return SubsystemsConfig{
System: SubsystemConfig{
- Enabled: true, // System scanner always available
- Timeout: 10 * time.Second, // System info should be fast
- CircuitBreaker: defaultCB,
+ Enabled: true, // System scanner always available
+ Timeout: 10 * time.Second, // System info should be fast
+ IntervalMinutes: 5, // Default: 5 minutes
+ CircuitBreaker: defaultCB,
},
Updates: SubsystemConfig{
- Enabled: true, // Virtual subsystem for package update scheduling
- Timeout: 0, // Not used - delegates to individual package scanners
- CircuitBreaker: CircuitBreakerConfig{Enabled: false}, // No circuit breaker for virtual subsystem
+ Enabled: true, // Virtual subsystem for package update scheduling
+ Timeout: 0, // Not used - delegates to individual package scanners
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: CircuitBreakerConfig{Enabled: false}, // No circuit breaker for virtual subsystem
},
APT: SubsystemConfig{
- Enabled: true,
- Timeout: 30 * time.Second,
- CircuitBreaker: defaultCB,
+ Enabled: true,
+ Timeout: 30 * time.Second,
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: defaultCB,
},
DNF: SubsystemConfig{
- Enabled: true,
- Timeout: 15 * time.Minute, // TODO: Make scanner timeouts user-adjustable via settings. DNF operations can take a long time on large systems
- CircuitBreaker: defaultCB,
+ Enabled: true,
+ Timeout: 15 * time.Minute, // TODO: Make scanner timeouts user-adjustable via settings. DNF operations can take a long time on large systems
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: defaultCB,
},
Docker: SubsystemConfig{
- Enabled: true,
- Timeout: 60 * time.Second, // Registry queries can be slow
- CircuitBreaker: defaultCB,
+ Enabled: true,
+ Timeout: 60 * time.Second, // Registry queries can be slow
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: defaultCB,
},
Windows: SubsystemConfig{
- Enabled: true,
- Timeout: 10 * time.Minute, // Windows Update can be VERY slow
- CircuitBreaker: windowsCB,
+ Enabled: true,
+ Timeout: 10 * time.Minute, // Windows Update can be VERY slow
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: windowsCB,
},
Winget: SubsystemConfig{
- Enabled: true,
- Timeout: 2 * time.Minute, // Winget has multiple retry strategies
- CircuitBreaker: defaultCB,
+ Enabled: true,
+ Timeout: 2 * time.Minute, // Winget has multiple retry strategies
+ IntervalMinutes: 15, // Default: 15 minutes
+ CircuitBreaker: defaultCB,
},
Storage: SubsystemConfig{
- Enabled: true,
- Timeout: 10 * time.Second, // Disk info should be fast
- CircuitBreaker: defaultCB,
+ Enabled: true,
+ Timeout: 10 * time.Second, // Disk info should be fast
+ IntervalMinutes: 5, // Default: 5 minutes
+ CircuitBreaker: defaultCB,
},
}
}
diff --git a/aggregator-agent/internal/crypto/verification.go b/aggregator-agent/internal/crypto/verification.go
new file mode 100644
index 0000000..d78df83
--- /dev/null
+++ b/aggregator-agent/internal/crypto/verification.go
@@ -0,0 +1,152 @@
+package crypto
+
+import (
+ "crypto/ed25519"
+ "crypto/sha256"
+ "encoding/hex"
+ "encoding/json"
+ "fmt"
+ "time"
+
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/client"
+)
+
+// CommandVerifier handles Ed25519 signature verification for commands
+type CommandVerifier struct {
+ // In the future, this could include:
+ // - Key rotation support
+ // - Multiple trusted keys
+ // - Revocation checking
+}
+
+// NewCommandVerifier creates a new command verifier
+func NewCommandVerifier() *CommandVerifier {
+ return &CommandVerifier{}
+}
+
+// VerifyCommand verifies that a command's signature is valid
+func (v *CommandVerifier) VerifyCommand(cmd client.Command, serverPubKey ed25519.PublicKey) error {
+ // Check if signature is present
+ if cmd.Signature == "" {
+ return fmt.Errorf("command missing signature")
+ }
+
+ // Decode the signature
+ sig, err := hex.DecodeString(cmd.Signature)
+ if err != nil {
+ return fmt.Errorf("invalid signature encoding: %w", err)
+ }
+
+ // Verify signature length
+ if len(sig) != ed25519.SignatureSize {
+ return fmt.Errorf("invalid signature length: expected %d bytes, got %d",
+ ed25519.SignatureSize, len(sig))
+ }
+
+ // Reconstruct the signed message
+ message, err := v.reconstructMessage(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to reconstruct message: %w", err)
+ }
+
+ // Verify the Ed25519 signature
+ if !ed25519.Verify(serverPubKey, message, sig) {
+ return fmt.Errorf("signature verification failed")
+ }
+
+ return nil
+}
+
+// reconstructMessage recreates the message that was signed by the server
+// This must exactly match the server's signing implementation
+func (v *CommandVerifier) reconstructMessage(cmd client.Command) ([]byte, error) {
+ // Marshal parameters to JSON
+ paramsJSON, err := json.Marshal(cmd.Params)
+ if err != nil {
+ return nil, fmt.Errorf("failed to marshal parameters: %w", err)
+ }
+
+ // Create SHA256 hash of parameters
+ paramsHash := sha256.Sum256(paramsJSON)
+ paramsHashHex := hex.EncodeToString(paramsHash[:])
+
+ // Create the message in the exact format the server uses
+ // Format: "ID:CommandType:ParamsHash"
+ message := fmt.Sprintf("%s:%s:%s",
+ cmd.ID,
+ cmd.Type,
+ paramsHashHex)
+
+ return []byte(message), nil
+}
+
+// VerifyCommandWithTimestamp verifies a command and checks its timestamp
+// This prevents replay attacks with old commands
+// Note: Timestamp verification requires the CreatedAt field which is not sent to agents
+// This method is kept for future enhancement when we add timestamp to the command payload
+func (v *CommandVerifier) VerifyCommandWithTimestamp(
+ cmd client.Command,
+ serverPubKey ed25519.PublicKey,
+ maxAge time.Duration,
+) error {
+ // First verify the signature
+ if err := v.VerifyCommand(cmd, serverPubKey); err != nil {
+ return err
+ }
+
+ // Timestamp checking is currently disabled as CreatedAt is not included in the command sent to agents
+ // TODO: Add CreatedAt to command payload if timestamp verification is needed
+
+ return nil
+}
+
+// VerifyCommandBatch verifies multiple commands efficiently
+// This is useful when processing multiple commands at once
+func (v *CommandVerifier) VerifyCommandBatch(
+ commands []client.Command,
+ serverPubKey ed25519.PublicKey,
+) []error {
+ errors := make([]error, len(commands))
+
+ for i, cmd := range commands {
+ errors[i] = v.VerifyCommand(cmd, serverPubKey)
+ }
+
+ return errors
+}
+
+// ExtractCommandIDFromSignature attempts to verify a signature and returns the command ID
+// This is useful for debugging and logging
+func (v *CommandVerifier) ExtractCommandIDFromSignature(
+ signature string,
+ expectedMessage string,
+ serverPubKey ed25519.PublicKey,
+) (string, error) {
+ // Decode signature
+ sig, err := hex.DecodeString(signature)
+ if err != nil {
+ return "", fmt.Errorf("invalid signature encoding: %w", err)
+ }
+
+ // Verify signature
+ if !ed25519.Verify(serverPubKey, []byte(expectedMessage), sig) {
+ return "", fmt.Errorf("signature verification failed")
+ }
+
+ // In a real implementation, we might embed the command ID in the signature
+ // For now, we return an empty string since the ID is part of the message
+ return "", nil
+}
+
+// CheckKeyRotation checks if a public key needs to be rotated
+// This is a placeholder for future key rotation support
+func (v *CommandVerifier) CheckKeyRotation(currentKey ed25519.PublicKey) (ed25519.PublicKey, bool, error) {
+ // In the future, this could:
+ // - Check a key rotation endpoint
+ // - Load multiple trusted keys
+ // - Implement key pinning with fallback
+ // - Handle emergency key revocation
+
+ // For now, just return the current key
+ return currentKey, false, nil
+}
\ No newline at end of file
diff --git a/aggregator-agent/internal/event/buffer.go b/aggregator-agent/internal/event/buffer.go
new file mode 100644
index 0000000..1324ef0
--- /dev/null
+++ b/aggregator-agent/internal/event/buffer.go
@@ -0,0 +1,135 @@
+package event
+
+import (
+ "encoding/json"
+ "fmt"
+ "os"
+ "path/filepath"
+ "sync"
+
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/models"
+ "github.com/google/uuid"
+)
+
+const (
+ defaultMaxBufferSize = 1000 // Max events to buffer
+)
+
+// Buffer handles local event buffering for offline resilience
+type Buffer struct {
+ filePath string
+ maxSize int
+ mu sync.Mutex
+}
+
+// NewBuffer creates a new event buffer with the specified file path
+func NewBuffer(filePath string) *Buffer {
+ return &Buffer{
+ filePath: filePath,
+ maxSize: defaultMaxBufferSize,
+ }
+}
+
+// BufferEvent saves an event to the local buffer file
+func (b *Buffer) BufferEvent(event *models.SystemEvent) error {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+
+ // Ensure event has an ID
+ if event.ID == uuid.Nil {
+ return fmt.Errorf("event ID cannot be nil")
+ }
+
+ // Create directory if needed
+ dir := filepath.Dir(b.filePath)
+ if err := os.MkdirAll(dir, 0755); err != nil {
+ return fmt.Errorf("failed to create buffer directory: %w", err)
+ }
+
+ // Read existing buffer
+ var events []*models.SystemEvent
+ if data, err := os.ReadFile(b.filePath); err == nil {
+ if err := json.Unmarshal(data, &events); err != nil {
+ // If we can't unmarshal, start fresh
+ events = []*models.SystemEvent{}
+ }
+ }
+
+ // Append new event
+ events = append(events, event)
+
+ // Keep only last N events if buffer too large (circular buffer)
+ if len(events) > b.maxSize {
+ events = events[len(events)-b.maxSize:]
+ }
+
+ // Write back to file
+ data, err := json.Marshal(events)
+ if err != nil {
+ return fmt.Errorf("failed to marshal events: %w", err)
+ }
+
+ if err := os.WriteFile(b.filePath, data, 0644); err != nil {
+ return fmt.Errorf("failed to write buffer file: %w", err)
+ }
+
+ return nil
+}
+
+// GetBufferedEvents retrieves and clears the buffer
+func (b *Buffer) GetBufferedEvents() ([]*models.SystemEvent, error) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+
+ // Read buffer file
+ var events []*models.SystemEvent
+ data, err := os.ReadFile(b.filePath)
+ if err != nil {
+ if os.IsNotExist(err) {
+ return nil, nil // No buffer file means no events
+ }
+ return nil, fmt.Errorf("failed to read buffer file: %w", err)
+ }
+
+ if err := json.Unmarshal(data, &events); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal events: %w", err)
+ }
+
+ // Clear buffer file after reading
+ if err := os.Remove(b.filePath); err != nil && !os.IsNotExist(err) {
+ // Log warning but don't fail - events were still retrieved
+ fmt.Printf("Warning: Failed to clear buffer file: %v\n", err)
+ }
+
+ return events, nil
+}
+
+// SetMaxSize sets the maximum number of events to buffer
+func (b *Buffer) SetMaxSize(size int) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ b.maxSize = size
+}
+
+// GetStats returns buffer statistics
+func (b *Buffer) GetStats() (int, error) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+
+ data, err := os.ReadFile(b.filePath)
+ if err != nil {
+ if os.IsNotExist(err) {
+ return 0, nil
+ }
+ return 0, err
+ }
+
+ var events []*models.SystemEvent
+ if err := json.Unmarshal(data, &events); err != nil {
+ return 0, err
+ }
+
+ return len(events), nil
+}
\ No newline at end of file
diff --git a/aggregator-agent/internal/logging/example_integration.go b/aggregator-agent/internal/logging/example_integration.go
new file mode 100644
index 0000000..7c332f0
--- /dev/null
+++ b/aggregator-agent/internal/logging/example_integration.go
@@ -0,0 +1,138 @@
+package logging
+
+// This file contains example code showing how to integrate the security logger
+// into various parts of the agent application.
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/config"
+ "github.com/denisbrodbeck/machineid"
+)
+
+// Example of how to initialize the security logger in main.go
+func ExampleInitializeSecurityLogger(cfg *config.Config, dataDir string) (*SecurityLogger, error) {
+ // Create the security logger
+ securityLogger, err := NewSecurityLogger(cfg, dataDir)
+ if err != nil {
+ return nil, err
+ }
+
+ return securityLogger, nil
+}
+
+// Example of using the security logger in command executor
+func ExampleCommandExecution(securityLogger *SecurityLogger, command string, signature string) {
+ // Simulate signature verification
+ signatureValid := false // In real code, this would be actual verification
+
+ if !signatureValid {
+ securityLogger.LogCommandVerificationFailure(
+ "cmd-123",
+ "signature verification failed: crypto/rsa: verification error",
+ )
+ } else {
+ // Only log success if configured
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "INFO",
+ EventType: SecurityEventTypes.CmdSignatureVerificationSuccess,
+ Message: "Command signature verified successfully",
+ }
+ securityLogger.Log(event)
+ }
+}
+
+// Example of using the security logger in update handler
+func ExampleUpdateHandler(securityLogger *SecurityLogger, updateID string, updateData []byte, signature string) {
+ // Simulate nonce validation
+ nonceValid := false
+ if !nonceValid {
+ securityLogger.LogNonceValidationFailure(
+ "deadbeef-1234-5678-9abc-1234567890ef",
+ "nonce expired or reused",
+ )
+ }
+
+ // Simulate signature verification
+ signatureValid := false
+ if !signatureValid {
+ securityLogger.LogUpdateSignatureVerificationFailure(
+ updateID,
+ "signature does not match update data",
+ )
+ }
+}
+
+// Example of machine ID monitoring
+func ExampleMachineIDMonitoring(securityLogger *SecurityLogger) {
+ // Get current machine ID
+ currentID, err := machineid.ID()
+ if err != nil {
+ return
+ }
+
+ // In real code, you would store the previous ID somewhere
+ // This is just an example of how to log when it changes
+ previousID := "previous-machine-id-here"
+
+ if currentID != previousID {
+ securityLogger.LogMachineIDChangeDetected(
+ previousID,
+ currentID,
+ )
+ }
+}
+
+// Example of configuration monitoring
+func ExampleConfigMonitoring(securityLogger *SecurityLogger, configPath string) {
+ // In real code, you would calculate and store a hash of the config
+ // and validate it periodically
+ configTampered := true // Simulate detection
+
+ if configTampered {
+ securityLogger.LogConfigTamperingWarning(
+ configPath,
+ "configuration hash mismatch",
+ )
+ }
+}
+
+// Example of unauthorized command attempt
+func ExampleUnauthorizedCommand(securityLogger *SecurityLogger, command string) {
+ // Check if command is in allowed list
+ allowedCommands := map[string]bool{
+ "scan": true,
+ "update": true,
+ "cleanup": true,
+ }
+
+ if !allowedCommands[command] {
+ securityLogger.LogUnauthorizedCommandAttempt(
+ command,
+ "command not in allowed list",
+ )
+ }
+}
+
+// Example of sending security events to server
+func ExampleSendSecurityEvents(securityLogger *SecurityLogger, client interface{}) {
+ // Get batch of security events
+ events := securityLogger.GetBatch()
+ if len(events) > 0 {
+ // In real code, you would send these to the server
+ // If successful:
+ fmt.Printf("Sending %d security events to server...\n", len(events))
+
+ // Simulate successful send
+ success := true
+ if success {
+ securityLogger.ClearBatch()
+ fmt.Printf("Security events sent successfully\n")
+ } else {
+ // Events remain in buffer for next attempt
+ fmt.Printf("Failed to send security events, will retry\n")
+ }
+ }
+}
\ No newline at end of file
diff --git a/aggregator-agent/internal/logging/security_logger.go b/aggregator-agent/internal/logging/security_logger.go
new file mode 100644
index 0000000..9aeaf56
--- /dev/null
+++ b/aggregator-agent/internal/logging/security_logger.go
@@ -0,0 +1,444 @@
+package logging
+
+import (
+ "encoding/json"
+ "fmt"
+ "log"
+ "os"
+ "path/filepath"
+ "sync"
+ "time"
+
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/config"
+)
+
+// SecurityEvent represents a security event on the agent side
+// This is a simplified version of the server model to avoid circular dependencies
+type SecurityEvent struct {
+ Timestamp time.Time `json:"timestamp"`
+ Level string `json:"level"` // CRITICAL, WARNING, INFO, DEBUG
+ EventType string `json:"event_type"`
+ Message string `json:"message"`
+ Details map[string]interface{} `json:"details,omitempty"`
+}
+
+// SecurityLogConfig holds configuration for security logging on the agent
+type SecurityLogConfig struct {
+ Enabled bool `json:"enabled" env:"REDFLAG_AGENT_SECURITY_LOG_ENABLED" default:"true"`
+ Level string `json:"level" env:"REDFLAG_AGENT_SECURITY_LOG_LEVEL" default:"warning"` // none, error, warn, info, debug
+ LogSuccesses bool `json:"log_successes" env:"REDFLAG_AGENT_SECURITY_LOG_SUCCESSES" default:"false"`
+ FilePath string `json:"file_path" env:"REDFLAG_AGENT_SECURITY_LOG_PATH"` // Relative to agent data directory
+ MaxSizeMB int `json:"max_size_mb" env:"REDFLAG_AGENT_SECURITY_LOG_MAX_SIZE" default:"50"`
+ MaxFiles int `json:"max_files" env:"REDFLAG_AGENT_SECURITY_LOG_MAX_FILES" default:"5"`
+ BatchSize int `json:"batch_size" env:"REDFLAG_AGENT_SECURITY_LOG_BATCH_SIZE" default:"10"`
+ SendToServer bool `json:"send_to_server" env:"REDFLAG_AGENT_SECURITY_LOG_SEND" default:"true"`
+}
+
+// SecurityLogger handles security event logging on the agent
+type SecurityLogger struct {
+ config SecurityLogConfig
+ logger *log.Logger
+ file *os.File
+ mu sync.Mutex
+ buffer []*SecurityEvent
+ flushTimer *time.Timer
+ lastFlush time.Time
+ closed bool
+}
+
+// SecurityEventTypes defines all possible security event types on the agent
+var SecurityEventTypes = struct {
+ CmdSignatureVerificationFailed string
+ CmdSignatureVerificationSuccess string
+ UpdateNonceInvalid string
+ UpdateSignatureVerificationFailed string
+ MachineIDChangeDetected string
+ ConfigTamperingWarning string
+ UnauthorizedCommandAttempt string
+}{
+ CmdSignatureVerificationFailed: "CMD_SIGNATURE_VERIFICATION_FAILED",
+ CmdSignatureVerificationSuccess: "CMD_SIGNATURE_VERIFICATION_SUCCESS",
+ UpdateNonceInvalid: "UPDATE_NONCE_INVALID",
+ UpdateSignatureVerificationFailed: "UPDATE_SIGNATURE_VERIFICATION_FAILED",
+ MachineIDChangeDetected: "MACHINE_ID_CHANGE_DETECTED",
+ ConfigTamperingWarning: "CONFIG_TAMPERING_WARNING",
+ UnauthorizedCommandAttempt: "UNAUTHORIZED_COMMAND_ATTEMPT",
+}
+
+// NewSecurityLogger creates a new agent security logger
+func NewSecurityLogger(agentConfig *config.Config, logDir string) (*SecurityLogger, error) {
+ // Create default security log config
+ secConfig := SecurityLogConfig{
+ Enabled: true,
+ Level: "warning",
+ LogSuccesses: false,
+ FilePath: "security.log",
+ MaxSizeMB: 50,
+ MaxFiles: 5,
+ BatchSize: 10,
+ SendToServer: true,
+ }
+
+ // Ensure log directory exists
+ if err := os.MkdirAll(logDir, 0755); err != nil {
+ return nil, fmt.Errorf("failed to create security log directory: %w", err)
+ }
+
+ // Open log file
+ logPath := filepath.Join(logDir, secConfig.FilePath)
+ file, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0600)
+ if err != nil {
+ return nil, fmt.Errorf("failed to open security log file: %w", err)
+ }
+
+ logger := &SecurityLogger{
+ config: secConfig,
+ logger: log.New(file, "[SECURITY] ", log.LstdFlags|log.LUTC),
+ file: file,
+ buffer: make([]*SecurityEvent, 0, secConfig.BatchSize),
+ lastFlush: time.Now(),
+ }
+
+ // Start flush timer
+ logger.flushTimer = time.AfterFunc(30*time.Second, logger.flushBuffer)
+
+ return logger, nil
+}
+
+// Log writes a security event
+func (sl *SecurityLogger) Log(event *SecurityEvent) error {
+ if !sl.config.Enabled || sl.config.Level == "none" {
+ return nil
+ }
+
+ // Skip successes unless configured to log them
+ if !sl.config.LogSuccesses && event.EventType == SecurityEventTypes.CmdSignatureVerificationSuccess {
+ return nil
+ }
+
+ // Filter by log level
+ if !sl.shouldLogLevel(event.Level) {
+ return nil
+ }
+
+ sl.mu.Lock()
+ defer sl.mu.Unlock()
+
+ if sl.closed {
+ return fmt.Errorf("security logger is closed")
+ }
+
+ // Add prefix to distinguish security events
+ event.Message = "SECURITY: " + event.Message
+
+ // Write immediately for critical events
+ if event.Level == "CRITICAL" {
+ return sl.writeEvent(event)
+ }
+
+ // Add to buffer
+ sl.buffer = append(sl.buffer, event)
+
+ // Flush if buffer is full
+ if len(sl.buffer) >= sl.config.BatchSize {
+ sl.flushBufferUnsafe()
+ }
+
+ return nil
+}
+
+// LogCommandVerificationFailure logs a command signature verification failure
+func (sl *SecurityLogger) LogCommandVerificationFailure(commandID string, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "CRITICAL",
+ EventType: SecurityEventTypes.CmdSignatureVerificationFailed,
+ Message: "Command signature verification failed",
+ Details: map[string]interface{}{
+ "command_id": commandID,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogNonceValidationFailure logs a nonce validation failure
+func (sl *SecurityLogger) LogNonceValidationFailure(nonce string, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "WARNING",
+ EventType: SecurityEventTypes.UpdateNonceInvalid,
+ Message: "Update nonce validation failed",
+ Details: map[string]interface{}{
+ "nonce": nonce[:min(len(nonce), 16)] + "...", // Truncate for security
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogUpdateSignatureVerificationFailure logs an update signature verification failure
+func (sl *SecurityLogger) LogUpdateSignatureVerificationFailure(updateID string, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "CRITICAL",
+ EventType: SecurityEventTypes.UpdateSignatureVerificationFailed,
+ Message: "Update signature verification failed",
+ Details: map[string]interface{}{
+ "update_id": updateID,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogMachineIDChangeDetected logs when machine ID changes
+func (sl *SecurityLogger) LogMachineIDChangeDetected(oldID, newID string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "WARNING",
+ EventType: SecurityEventTypes.MachineIDChangeDetected,
+ Message: "Machine ID change detected",
+ Details: map[string]interface{}{
+ "old_machine_id": oldID,
+ "new_machine_id": newID,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogConfigTamperingWarning logs when configuration tampering is suspected
+func (sl *SecurityLogger) LogConfigTamperingWarning(configPath string, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "WARNING",
+ EventType: SecurityEventTypes.ConfigTamperingWarning,
+ Message: "Configuration file tampering detected",
+ Details: map[string]interface{}{
+ "config_file": configPath,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogUnauthorizedCommandAttempt logs an attempt to run an unauthorized command
+func (sl *SecurityLogger) LogUnauthorizedCommandAttempt(command string, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "WARNING",
+ EventType: SecurityEventTypes.UnauthorizedCommandAttempt,
+ Message: "Unauthorized command execution attempt",
+ Details: map[string]interface{}{
+ "command": command,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogCommandVerificationSuccess logs a successful command signature verification
+func (sl *SecurityLogger) LogCommandVerificationSuccess(commandID string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "INFO",
+ EventType: SecurityEventTypes.CmdSignatureVerificationSuccess,
+ Message: "Command signature verified successfully",
+ Details: map[string]interface{}{
+ "command_id": commandID,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogCommandVerificationFailed logs a failed command signature verification
+func (sl *SecurityLogger) LogCommandVerificationFailed(commandID, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "CRITICAL",
+ EventType: SecurityEventTypes.CmdSignatureVerificationFailed,
+ Message: "Command signature verification failed",
+ Details: map[string]interface{}{
+ "command_id": commandID,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogCommandSkipped logs when a command is skipped due to signing configuration
+func (sl *SecurityLogger) LogCommandSkipped(commandID, reason string) {
+ if sl == nil {
+ return
+ }
+
+ event := &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: "INFO",
+ EventType: "COMMAND_SKIPPED",
+ Message: "Command skipped due to signing configuration",
+ Details: map[string]interface{}{
+ "command_id": commandID,
+ "reason": reason,
+ },
+ }
+
+ _ = sl.Log(event)
+}
+
+// GetBatch returns a batch of events for sending to server
+func (sl *SecurityLogger) GetBatch() []*SecurityEvent {
+ sl.mu.Lock()
+ defer sl.mu.Unlock()
+
+ if len(sl.buffer) == 0 {
+ return nil
+ }
+
+ // Copy buffer
+ batch := make([]*SecurityEvent, len(sl.buffer))
+ copy(batch, sl.buffer)
+
+ // Clear buffer
+ sl.buffer = sl.buffer[:0]
+
+ return batch
+}
+
+// ClearBatch clears the buffer after successful send to server
+func (sl *SecurityLogger) ClearBatch() {
+ sl.mu.Lock()
+ defer sl.mu.Unlock()
+ sl.buffer = sl.buffer[:0]
+}
+
+// writeEvent writes an event to the log file
+func (sl *SecurityLogger) writeEvent(event *SecurityEvent) error {
+ jsonData, err := json.Marshal(event)
+ if err != nil {
+ return fmt.Errorf("failed to marshal security event: %w", err)
+ }
+
+ sl.logger.Println(string(jsonData))
+ return nil
+}
+
+// flushBuffer flushes all buffered events to file
+func (sl *SecurityLogger) flushBuffer() {
+ sl.mu.Lock()
+ defer sl.mu.Unlock()
+ sl.flushBufferUnsafe()
+}
+
+// flushBufferUnsafe flushes buffer without acquiring lock (must be called with lock held)
+func (sl *SecurityLogger) flushBufferUnsafe() {
+ for _, event := range sl.buffer {
+ if err := sl.writeEvent(event); err != nil {
+ log.Printf("[ERROR] Failed to write security event: %v", err)
+ }
+ }
+
+ sl.buffer = sl.buffer[:0]
+ sl.lastFlush = time.Now()
+
+ // Reset timer if not closed
+ if !sl.closed && sl.flushTimer != nil {
+ sl.flushTimer.Stop()
+ sl.flushTimer.Reset(30 * time.Second)
+ }
+}
+
+// shouldLogLevel checks if the event should be logged based on the configured level
+func (sl *SecurityLogger) shouldLogLevel(eventLevel string) bool {
+ levels := map[string]int{
+ "NONE": 0,
+ "CRITICAL": 1, // rank with ERROR so critical events survive strict levels
+ "ERROR": 1,
+ "WARNING": 2,
+ "INFO": 3,
+ "DEBUG": 4,
+ }
+
+ configLevel, knownConfig := levels[sl.config.Level]
+ if !knownConfig {
+ configLevel = 2 // Unknown config level: default to WARNING rather than silencing all events
+ }
+ eventLvl, exists := levels[eventLevel]
+ if !exists {
+ eventLvl = 2 // Default to WARNING
+ }
+
+ return eventLvl <= configLevel
+}
+
+// Close closes the security logger
+func (sl *SecurityLogger) Close() error {
+ sl.mu.Lock()
+ defer sl.mu.Unlock()
+
+ if sl.closed {
+ return nil
+ }
+
+ // Stop flush timer
+ if sl.flushTimer != nil {
+ sl.flushTimer.Stop()
+ }
+
+ // Flush remaining events
+ sl.flushBufferUnsafe()
+
+ // Close file
+ if sl.file != nil {
+ err := sl.file.Close()
+ sl.closed = true
+ return err
+ }
+
+ sl.closed = true
+ return nil
+}
+
+// min returns the minimum of two integers
+func min(a, b int) int {
+ if a < b {
+ return a
+ }
+ return b
+}
\ No newline at end of file
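
The level filter in `shouldLogLevel` orders severities numerically and writes an event only when its rank does not exceed the configured threshold. A minimal standalone sketch of that ordering (helper names here are illustrative, not part of the agent's API):

```go
package main

import "fmt"

// severityRank mirrors the ordering used by SecurityLogger.shouldLogLevel:
// an event is written when its rank does not exceed the configured level's rank.
var severityRank = map[string]int{
	"NONE":    0,
	"ERROR":   1,
	"WARNING": 2,
	"INFO":    3,
	"DEBUG":   4,
}

// shouldLog reports whether an event at eventLevel passes a logger configured
// at configLevel; unknown event levels fall back to WARNING, as in the agent.
func shouldLog(configLevel, eventLevel string) bool {
	rank, ok := severityRank[eventLevel]
	if !ok {
		rank = severityRank["WARNING"]
	}
	return rank <= severityRank[configLevel]
}

func main() {
	fmt.Println(shouldLog("WARNING", "ERROR")) // an ERROR event passes a WARNING threshold
	fmt.Println(shouldLog("WARNING", "DEBUG")) // a DEBUG event does not
}
```

Note that `"NONE": 0` means a `NONE` threshold rejects everything, since every known event level ranks at least 1.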
diff --git a/aggregator-agent/internal/migration/detection.go b/aggregator-agent/internal/migration/detection.go
index d1b7d6b..6eef1ee 100644
--- a/aggregator-agent/internal/migration/detection.go
+++ b/aggregator-agent/internal/migration/detection.go
@@ -7,10 +7,12 @@ import (
"io"
"os"
"path/filepath"
+ "strconv"
"strings"
"time"
- "github.com/Fimeg/RedFlag/aggregator/pkg/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/version"
)
// AgentFileInventory represents all files associated with an agent installation
@@ -26,14 +28,14 @@ type AgentFileInventory struct {
// MigrationDetection represents the result of migration detection
type MigrationDetection struct {
- CurrentAgentVersion string `json:"current_agent_version"`
- CurrentConfigVersion int `json:"current_config_version"`
- RequiresMigration bool `json:"requires_migration"`
- RequiredMigrations []string `json:"required_migrations"`
- MissingSecurityFeatures []string `json:"missing_security_features"`
+ CurrentAgentVersion string `json:"current_agent_version"`
+ CurrentConfigVersion int `json:"current_config_version"`
+ RequiresMigration bool `json:"requires_migration"`
+ RequiredMigrations []string `json:"required_migrations"`
+ MissingSecurityFeatures []string `json:"missing_security_features"`
Inventory *AgentFileInventory `json:"inventory"`
- DockerDetection *DockerDetection `json:"docker_detection,omitempty"`
- DetectionTime time.Time `json:"detection_time"`
+ DockerDetection *DockerDetection `json:"docker_detection,omitempty"`
+ DetectionTime time.Time `json:"detection_time"`
}
// SecurityFeature represents a security feature that may be missing
@@ -59,8 +61,8 @@ func NewFileDetectionConfig() *FileDetectionConfig {
OldConfigPath: "/etc/aggregator",
OldStatePath: "/var/lib/aggregator",
NewConfigPath: "/etc/redflag",
- NewStatePath: "/var/lib/redflag",
- BackupDirPattern: "/etc/redflag.backup.%s",
+ NewStatePath: "/var/lib/redflag-agent",
+ BackupDirPattern: "/var/lib/redflag-agent/migration_backups_%s",
}
}
@@ -155,15 +157,15 @@ func scanAgentFiles(config *FileDetectionConfig) (*AgentFileInventory, error) {
// Categorize files
for _, file := range files {
switch {
- case containsAny(file.Path, filePatterns["config"]):
+ case ContainsAny(file.Path, filePatterns["config"]):
inventory.ConfigFiles = append(inventory.ConfigFiles, file)
- case containsAny(file.Path, filePatterns["state"]):
+ case ContainsAny(file.Path, filePatterns["state"]):
inventory.StateFiles = append(inventory.StateFiles, file)
- case containsAny(file.Path, filePatterns["binary"]):
+ case ContainsAny(file.Path, filePatterns["binary"]):
inventory.BinaryFiles = append(inventory.BinaryFiles, file)
- case containsAny(file.Path, filePatterns["log"]):
+ case ContainsAny(file.Path, filePatterns["log"]):
inventory.LogFiles = append(inventory.LogFiles, file)
- case containsAny(file.Path, filePatterns["certificate"]):
+ case ContainsAny(file.Path, filePatterns["certificate"]):
inventory.CertificateFiles = append(inventory.CertificateFiles, file)
}
}
@@ -280,32 +282,98 @@ func readConfigVersion(configPath string) (string, int, error) {
func determineRequiredMigrations(detection *MigrationDetection, config *FileDetectionConfig) []string {
var migrations []string
+ // Check migration state to skip already completed migrations
+ configPath := filepath.Join(config.NewConfigPath, "config.json")
+ stateManager := NewStateManager(configPath)
+
// Check if old directories exist
for _, oldDir := range detection.Inventory.OldDirectoryPaths {
if _, err := os.Stat(oldDir); err == nil {
- migrations = append(migrations, "directory_migration")
+ // Check if directory migration was already completed
+ completed, err := stateManager.IsMigrationCompleted("directory_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "directory_migration")
+ }
break
}
}
- // Check config version compatibility
- if detection.CurrentConfigVersion < 4 {
- migrations = append(migrations, "config_migration")
+ // Check for legacy installation (old path migration)
+ hasLegacyDirs := false
+ for _, oldDir := range detection.Inventory.OldDirectoryPaths {
+ if _, err := os.Stat(oldDir); err == nil {
+ hasLegacyDirs = true
+ break
+ }
}
- // Check if Docker secrets migration is needed (v5)
- if detection.CurrentConfigVersion < 5 {
- migrations = append(migrations, "config_v5_migration")
+ // Legacy migration: always migrate if old directories exist
+ if hasLegacyDirs {
+ if detection.CurrentConfigVersion < 4 {
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("config_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "config_migration")
+ }
+ }
+
+ // Check if Docker secrets migration is needed (v5)
+ if detection.CurrentConfigVersion < 5 {
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("config_v5_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "config_v5_migration")
+ }
+ }
+ } else {
+ // Version-based migration: compare current config version with expected
+ // This handles upgrades for agents already in correct location
+ // Use version package for single source of truth
+ agentVersion := version.Version
+ expectedConfigVersionStr := version.ExtractConfigVersionFromAgent(agentVersion)
+ // Convert to int for comparison (e.g., "6" -> 6)
+ expectedConfigVersion := 6 // Default fallback
+ if expectedConfigInt, err := strconv.Atoi(expectedConfigVersionStr); err == nil {
+ expectedConfigVersion = expectedConfigInt
+ }
+
+ // If config file exists but version is old, migrate
+ if detection.CurrentConfigVersion < expectedConfigVersion {
+ if detection.CurrentConfigVersion < 4 {
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("config_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "config_migration")
+ }
+ }
+
+ // Check if Docker secrets migration is needed (v5)
+ if detection.CurrentConfigVersion < 5 {
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("config_v5_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "config_v5_migration")
+ }
+ }
+ }
}
// Check if Docker secrets migration is needed
if detection.DockerDetection != nil && detection.DockerDetection.MigrateToSecrets {
- migrations = append(migrations, "docker_secrets_migration")
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("docker_secrets_migration")
+ if err == nil && !completed {
+ migrations = append(migrations, "docker_secrets_migration")
+ }
}
// Check if security features need to be applied
if len(detection.MissingSecurityFeatures) > 0 {
- migrations = append(migrations, "security_hardening")
+ // Check if already completed
+ completed, err := stateManager.IsMigrationCompleted("security_hardening")
+ if err == nil && !completed {
+ migrations = append(migrations, "security_hardening")
+ }
}
return migrations
@@ -389,7 +457,7 @@ func calculateFileChecksum(filePath string) (string, error) {
return fmt.Sprintf("%x", hash.Sum(nil)), nil
}
-func containsAny(path string, patterns []string) bool {
+func ContainsAny(path string, patterns []string) bool {
for _, pattern := range patterns {
if matched, _ := filepath.Match(pattern, filepath.Base(path)); matched {
return true
@@ -404,7 +472,7 @@ func isRequiredFile(path string, patterns map[string][]string) bool {
}
func shouldMigrateFile(path string, patterns map[string][]string) bool {
- return !containsAny(path, []string{"*.log", "*.tmp"})
+ return !ContainsAny(path, []string{"*.log", "*.tmp"})
}
func getFileDescription(path string) string {
@@ -444,4 +512,4 @@ func detectBinaryVersion(binaryPath string) string {
// This would involve reading binary headers or executing with --version flag
// For now, return empty
return ""
-}
\ No newline at end of file
+}
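
The version-based branch of `determineRequiredMigrations` parses an expected config version from the agent version (falling back to 6 when the string is not numeric) and then gates each migration on a version threshold. A minimal sketch of that gating, with illustrative helper names (`expectedConfigVersion`, `requiredConfigMigrations` are not the repo's identifiers):

```go
package main

import (
	"fmt"
	"strconv"
)

// expectedConfigVersion parses the config version string derived from the
// agent version, falling back to a default when the string is not numeric.
// The fallback of 6 mirrors the default in determineRequiredMigrations.
func expectedConfigVersion(s string) int {
	v := 6 // default fallback
	if n, err := strconv.Atoi(s); err == nil {
		v = n
	}
	return v
}

// requiredConfigMigrations sketches the version-gated selection: migrations
// apply only when the current config version is below each threshold, and
// nothing applies once the config is at or past the expected version.
func requiredConfigMigrations(current, expected int) []string {
	var out []string
	if current >= expected {
		return out
	}
	if current < 4 {
		out = append(out, "config_migration")
	}
	if current < 5 {
		out = append(out, "config_v5_migration")
	}
	return out
}

func main() {
	fmt.Println(requiredConfigMigrations(3, expectedConfigVersion("6")))
	fmt.Println(requiredConfigMigrations(6, expectedConfigVersion("not-a-number")))
}
```

In the real code each selected migration is additionally checked against the state manager's `IsMigrationCompleted` record before being queued.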
diff --git a/aggregator-agent/internal/migration/docker.go b/aggregator-agent/internal/migration/docker.go
index e0c7163..d88c4aa 100644
--- a/aggregator-agent/internal/migration/docker.go
+++ b/aggregator-agent/internal/migration/docker.go
@@ -15,7 +15,7 @@ import (
"strings"
"time"
- "github.com/Fimeg/RedFlag/aggregator/pkg/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/common"
)
// DockerDetection represents Docker secrets detection results
diff --git a/aggregator-agent/internal/migration/docker_executor.go b/aggregator-agent/internal/migration/docker_executor.go
index d41c621..38858eb 100644
--- a/aggregator-agent/internal/migration/docker_executor.go
+++ b/aggregator-agent/internal/migration/docker_executor.go
@@ -8,7 +8,7 @@ import (
"strings"
"time"
- "github.com/Fimeg/RedFlag/aggregator/pkg/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/common"
)
// DockerSecretsExecutor handles the execution of Docker secrets migration
diff --git a/aggregator-agent/internal/migration/executor.go b/aggregator-agent/internal/migration/executor.go
index 057b74a..6150534 100644
--- a/aggregator-agent/internal/migration/executor.go
+++ b/aggregator-agent/internal/migration/executor.go
@@ -7,7 +7,10 @@ import (
"strings"
"time"
- "github.com/Fimeg/RedFlag/aggregator/pkg/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/common"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/event"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/models"
+ "github.com/google/uuid"
)
// MigrationPlan represents a complete migration plan
@@ -36,15 +39,60 @@ type MigrationResult struct {
// MigrationExecutor handles the execution of migration plans
type MigrationExecutor struct {
- plan *MigrationPlan
- result *MigrationResult
+ plan *MigrationPlan
+ result *MigrationResult
+ eventBuffer *event.Buffer
+ agentID uuid.UUID
+ stateManager *StateManager
}
// NewMigrationExecutor creates a new migration executor
-func NewMigrationExecutor(plan *MigrationPlan) *MigrationExecutor {
+func NewMigrationExecutor(plan *MigrationPlan, configPath string) *MigrationExecutor {
return &MigrationExecutor{
- plan: plan,
- result: &MigrationResult{},
+ plan: plan,
+ result: &MigrationResult{},
+ stateManager: NewStateManager(configPath),
+ }
+}
+
+// NewMigrationExecutorWithEvents creates a new migration executor with event buffering
+func NewMigrationExecutorWithEvents(plan *MigrationPlan, eventBuffer *event.Buffer, agentID uuid.UUID, configPath string) *MigrationExecutor {
+ return &MigrationExecutor{
+ plan: plan,
+ result: &MigrationResult{},
+ eventBuffer: eventBuffer,
+ agentID: agentID,
+ stateManager: NewStateManager(configPath),
+ }
+}
+
+// bufferEvent buffers a migration failure event
+func (e *MigrationExecutor) bufferEvent(eventSubtype, severity, component, message string, metadata map[string]interface{}) {
+ if e.eventBuffer == nil {
+ return // Event buffering not enabled
+ }
+
+ // Use agent ID if available
+ var agentIDPtr *uuid.UUID
+ if e.agentID != uuid.Nil {
+ agentIDPtr = &e.agentID
+ }
+
+ ev := &models.SystemEvent{
+ ID: uuid.New(),
+ AgentID: agentIDPtr,
+ EventType: "migration_failure",
+ EventSubtype: eventSubtype,
+ Severity: severity,
+ Component: component,
+ Message: message,
+ Metadata: metadata,
+ CreatedAt: time.Now(),
+ }
+
+ // Buffer the event (best effort); named ev to avoid shadowing the event package
+ if err := e.eventBuffer.BufferEvent(ev); err != nil {
+ fmt.Printf("Warning: Failed to buffer migration event: %v\n", err)
}
}
@@ -58,6 +106,13 @@ func (e *MigrationExecutor) ExecuteMigration() (*MigrationResult, error) {
// Phase 1: Create backups
if err := e.createBackups(); err != nil {
+ e.bufferEvent("backup_creation_failure", "error", "migration_executor",
+ fmt.Sprintf("Backup creation failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "backup_path": e.plan.BackupPath,
+ "phase": "backup_creation",
+ })
return e.completeMigration(false, fmt.Errorf("backup creation failed: %w", err))
}
e.result.AppliedChanges = append(e.result.AppliedChanges, "Created backups at "+e.plan.BackupPath)
@@ -65,30 +120,69 @@ func (e *MigrationExecutor) ExecuteMigration() (*MigrationResult, error) {
// Phase 2: Directory migration
if contains(e.plan.Detection.RequiredMigrations, "directory_migration") {
if err := e.migrateDirectories(); err != nil {
+ e.bufferEvent("directory_migration_failure", "error", "migration_executor",
+ fmt.Sprintf("Directory migration failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "phase": "directory_migration",
+ })
return e.completeMigration(false, fmt.Errorf("directory migration failed: %w", err))
}
e.result.AppliedChanges = append(e.result.AppliedChanges, "Migrated directories")
+
+ // Mark directory migration as completed
+ if err := e.stateManager.MarkMigrationCompleted("directory_migration", e.plan.BackupPath, e.plan.TargetVersion); err != nil {
+ fmt.Printf("[MIGRATION] Warning: Failed to mark directory migration as completed: %v\n", err)
+ }
}
// Phase 3: Configuration migration
if contains(e.plan.Detection.RequiredMigrations, "config_migration") {
if err := e.migrateConfiguration(); err != nil {
+ e.bufferEvent("configuration_migration_failure", "error", "migration_executor",
+ fmt.Sprintf("Configuration migration failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "phase": "configuration_migration",
+ })
return e.completeMigration(false, fmt.Errorf("configuration migration failed: %w", err))
}
e.result.AppliedChanges = append(e.result.AppliedChanges, "Migrated configuration")
+
+ // Mark configuration migration as completed
+ if err := e.stateManager.MarkMigrationCompleted("config_migration", e.plan.BackupPath, e.plan.TargetVersion); err != nil {
+ fmt.Printf("[MIGRATION] Warning: Failed to mark configuration migration as completed: %v\n", err)
+ }
}
// Phase 4: Docker secrets migration (if available)
if contains(e.plan.Detection.RequiredMigrations, "docker_secrets_migration") {
if e.plan.Detection.DockerDetection == nil {
+ e.bufferEvent("docker_migration_failure", "error", "migration_executor",
+ "Docker secrets migration requested but detection data missing",
+ map[string]interface{}{
+ "error": "missing detection data",
+ "phase": "docker_secrets_migration",
+ })
return e.completeMigration(false, fmt.Errorf("docker secrets migration requested but detection data missing"))
}
dockerExecutor := NewDockerSecretsExecutor(e.plan.Detection.DockerDetection, e.plan.Config)
if err := dockerExecutor.ExecuteDockerSecretsMigration(); err != nil {
+ e.bufferEvent("docker_migration_failure", "error", "migration_executor",
+ fmt.Sprintf("Docker secrets migration failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "phase": "docker_secrets_migration",
+ })
return e.completeMigration(false, fmt.Errorf("docker secrets migration failed: %w", err))
}
e.result.AppliedChanges = append(e.result.AppliedChanges, "Migrated to Docker secrets")
+
+ // Mark docker secrets migration as completed
+ if err := e.stateManager.MarkMigrationCompleted("docker_secrets_migration", e.plan.BackupPath, e.plan.TargetVersion); err != nil {
+ fmt.Printf("[MIGRATION] Warning: Failed to mark docker secrets migration as completed: %v\n", err)
+ }
}
// Phase 5: Security hardening
@@ -98,11 +192,22 @@ func (e *MigrationExecutor) ExecuteMigration() (*MigrationResult, error) {
fmt.Sprintf("Security hardening incomplete: %v", err))
} else {
e.result.AppliedChanges = append(e.result.AppliedChanges, "Applied security hardening")
+
+ // Mark security hardening as completed
+ if err := e.stateManager.MarkMigrationCompleted("security_hardening", e.plan.BackupPath, e.plan.TargetVersion); err != nil {
+ fmt.Printf("[MIGRATION] Warning: Failed to mark security hardening as completed: %v\n", err)
+ }
}
}
// Phase 6: Validation
if err := e.validateMigration(); err != nil {
+ e.bufferEvent("migration_validation_failure", "error", "migration_executor",
+ fmt.Sprintf("Migration validation failed: %v", err),
+ map[string]interface{}{
+ "error": err.Error(),
+ "phase": "validation",
+ })
return e.completeMigration(false, fmt.Errorf("migration validation failed: %w", err))
}
@@ -252,27 +357,78 @@ func (e *MigrationExecutor) collectAllFiles() []common.AgentFile {
}
func (e *MigrationExecutor) backupFile(file common.AgentFile, backupPath string) error {
- relPath, err := filepath.Rel(e.plan.Config.OldConfigPath, file.Path)
- if err != nil {
- // Try relative to old state path
- relPath, err = filepath.Rel(e.plan.Config.OldStatePath, file.Path)
- if err != nil {
- relPath = filepath.Base(file.Path)
+ // Check if file exists before attempting backup
+ if _, err := os.Stat(file.Path); err != nil {
+ if os.IsNotExist(err) {
+ // File doesn't exist, log and skip
+ fmt.Printf("[MIGRATION] [agent] [migration_executor] File does not exist, skipping backup: %s\n", file.Path)
+ e.bufferEvent("backup_file_missing", "warning", "migration_executor",
+ fmt.Sprintf("File does not exist, skipping backup: %s", file.Path),
+ map[string]interface{}{
+ "file_path": file.Path,
+ "phase": "backup",
+ })
+ return nil
+ }
+ return fmt.Errorf("migration: failed to stat file %s: %w", file.Path, err)
+ }
+
+ // Clean paths to fix trailing slash issues
+ cleanOldConfig := filepath.Clean(e.plan.Config.OldConfigPath)
+ cleanOldState := filepath.Clean(e.plan.Config.OldStatePath)
+ cleanPath := filepath.Clean(file.Path)
+ var relPath string
+ var err error
+
+ // Try to get relative path based on expected file location
+ // If file is under old config path, use that as base
+ if cleanPath == cleanOldConfig || strings.HasPrefix(cleanPath, cleanOldConfig+string(os.PathSeparator)) {
+ relPath, err = filepath.Rel(cleanOldConfig, cleanPath)
+ if err != nil || strings.Contains(relPath, "..") {
+ // Fallback to filename if path traversal or error
+ relPath = filepath.Base(cleanPath)
+ }
+ } else if cleanPath == cleanOldState || strings.HasPrefix(cleanPath, cleanOldState+string(os.PathSeparator)) {
+ relPath, err = filepath.Rel(cleanOldState, cleanPath)
+ if err != nil || strings.Contains(relPath, "..") {
+ // Fallback to filename if path traversal or error
+ relPath = filepath.Base(cleanPath)
+ }
+ } else {
+ // File is not in expected old locations - use just the filename
+ // This happens for files already in the new location
+ relPath = filepath.Base(cleanPath)
+ // Add subdirectory based on file type to avoid collisions
+ switch {
+ case ContainsAny(cleanPath, []string{"config.json", "agent.key", "server.key", "ca.crt"}):
+ relPath = filepath.Join("config", relPath)
+ case ContainsAny(cleanPath, []string{
+ "pending_acks.json", "public_key.cache", "last_scan.json", "metrics.json"}):
+ relPath = filepath.Join("state", relPath)
}
}
- backupFilePath := filepath.Join(backupPath, relPath)
+ // Ensure backup path is clean
+ cleanBackupPath := filepath.Clean(backupPath)
+ backupFilePath := filepath.Join(cleanBackupPath, relPath)
+ backupFilePath = filepath.Clean(backupFilePath)
backupDir := filepath.Dir(backupFilePath)
+ // Final safety check
+ if strings.Contains(backupFilePath, "..") {
+ return fmt.Errorf("migration: backup path contains parent directory reference: %s", backupFilePath)
+ }
+
if err := os.MkdirAll(backupDir, 0755); err != nil {
- return fmt.Errorf("failed to create backup directory: %w", err)
+ return fmt.Errorf("migration: failed to create backup directory %s: %w", backupDir, err)
}
// Copy file to backup location
- if err := copyFile(file.Path, backupFilePath); err != nil {
- return fmt.Errorf("failed to copy file to backup: %w", err)
+ if err := copyFile(cleanPath, backupFilePath); err != nil {
+ return fmt.Errorf("migration: failed to copy file to backup: %w", err)
}
+ fmt.Printf("[MIGRATION] [agent] [migration_executor] Successfully backed up: %s\n", cleanPath)
return nil
}
@@ -349,6 +505,11 @@ func (e *MigrationExecutor) completeMigration(success bool, err error) (*Migrati
if e.result.RollbackAvailable {
fmt.Printf("[MIGRATION] π¦ Rollback available at: %s\n", e.result.BackupPath)
}
+
+ // Clean up old directories after successful migration
+ if err := e.stateManager.CleanupOldDirectories(); err != nil {
+ fmt.Printf("[MIGRATION] Warning: Failed to cleanup old directories: %v\n", err)
+ }
} else {
fmt.Printf("[MIGRATION] β Migration failed after %v\n", e.result.Duration)
if len(e.result.Errors) > 0 {
diff --git a/aggregator-agent/internal/models/system_event.go b/aggregator-agent/internal/models/system_event.go
new file mode 100644
index 0000000..bf5945b
--- /dev/null
+++ b/aggregator-agent/internal/models/system_event.go
@@ -0,0 +1,79 @@
+package models
+
+import (
+ "time"
+
+ "github.com/google/uuid"
+)
+
+// SystemEvent represents a unified event log entry for all system events
+// This is a copy of the server model to avoid circular dependencies
+type SystemEvent struct {
+ ID uuid.UUID `json:"id" db:"id"`
+ AgentID *uuid.UUID `json:"agent_id,omitempty" db:"agent_id"` // Pointer to allow NULL for server events
+ EventType string `json:"event_type" db:"event_type"` // e.g., 'agent_update', 'agent_startup', 'server_build'
+ EventSubtype string `json:"event_subtype" db:"event_subtype"` // e.g., 'success', 'failed', 'info', 'warning'
+ Severity string `json:"severity" db:"severity"` // 'info', 'warning', 'error', 'critical'
+ Component string `json:"component" db:"component"` // 'agent', 'server', 'build', 'download', 'config', etc.
+ Message string `json:"message" db:"message"`
+ Metadata map[string]interface{} `json:"metadata,omitempty" db:"metadata"` // JSONB for structured data
+ CreatedAt time.Time `json:"created_at" db:"created_at"`
+}
+
+// Event type constants
+const (
+ EventTypeAgentStartup = "agent_startup"
+ EventTypeAgentRegistration = "agent_registration"
+ EventTypeAgentCheckIn = "agent_checkin"
+ EventTypeAgentScan = "agent_scan"
+ EventTypeAgentUpdate = "agent_update"
+ EventTypeAgentConfig = "agent_config"
+ EventTypeAgentMigration = "agent_migration"
+ EventTypeAgentShutdown = "agent_shutdown"
+ EventTypeServerBuild = "server_build"
+ EventTypeServerDownload = "server_download"
+ EventTypeServerConfig = "server_config"
+ EventTypeServerAuth = "server_auth"
+ EventTypeDownload = "download"
+ EventTypeMigration = "migration"
+ EventTypeError = "error"
+)
+
+// Event subtype constants
+const (
+ SubtypeSuccess = "success"
+ SubtypeFailed = "failed"
+ SubtypeInfo = "info"
+ SubtypeWarning = "warning"
+ SubtypeCritical = "critical"
+ SubtypeDownloadFailed = "download_failed"
+ SubtypeValidationFailed = "validation_failed"
+ SubtypeConfigCorrupted = "config_corrupted"
+ SubtypeMigrationNeeded = "migration_needed"
+ SubtypePanicRecovered = "panic_recovered"
+ SubtypeTokenExpired = "token_expired"
+ SubtypeNetworkTimeout = "network_timeout"
+ SubtypePermissionDenied = "permission_denied"
+ SubtypeServiceUnavailable = "service_unavailable"
+)
+
+// Severity constants
+const (
+ SeverityInfo = "info"
+ SeverityWarning = "warning"
+ SeverityError = "error"
+ SeverityCritical = "critical"
+)
+
+// Component constants
+const (
+ ComponentAgent = "agent"
+ ComponentServer = "server"
+ ComponentBuild = "build"
+ ComponentDownload = "download"
+ ComponentConfig = "config"
+ ComponentDatabase = "database"
+ ComponentNetwork = "network"
+ ComponentSecurity = "security"
+ ComponentMigration = "migration"
+)
\ No newline at end of file
diff --git a/aggregator-agent/internal/orchestrator/command_handler.go b/aggregator-agent/internal/orchestrator/command_handler.go
new file mode 100644
index 0000000..1fcf6a4
--- /dev/null
+++ b/aggregator-agent/internal/orchestrator/command_handler.go
@@ -0,0 +1,104 @@
+package orchestrator
+
+import (
+ "crypto/ed25519"
+ "fmt"
+ "log"
+
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/client"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/config"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/crypto"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/logging"
+ "github.com/google/uuid"
+)
+
+// CommandHandler handles command processing with signature verification
+type CommandHandler struct {
+ verifier *crypto.CommandVerifier
+ securityLogger *logging.SecurityLogger
+ serverPublicKey ed25519.PublicKey
+ logger *log.Logger
+}
+
+// CommandSigningConfig holds configuration for command signing
+type CommandSigningConfig struct {
+ Enabled bool `json:"enabled" env:"REDFLAG_AGENT_COMMAND_SIGNING_ENABLED" default:"true"`
+ EnforcementMode string `json:"enforcement_mode" env:"REDFLAG_AGENT_COMMAND_ENFORCEMENT_MODE" default:"strict"` // strict, warning, disabled
+}
+
+// NewCommandHandler creates a new command handler
+func NewCommandHandler(cfg *config.Config, securityLogger *logging.SecurityLogger, logger *log.Logger) (*CommandHandler, error) {
+ handler := &CommandHandler{
+ securityLogger: securityLogger,
+ logger: logger,
+ verifier: crypto.NewCommandVerifier(),
+ }
+
+ // Load server public key if command signing is enabled
+ if cfg.CommandSigning.Enabled {
+ publicKey, err := crypto.LoadCachedPublicKey()
+ if err != nil {
+ // Try to fetch from server if not cached
+ publicKey, err = crypto.GetPublicKey(cfg.ServerURL)
+ if err != nil {
+ return nil, fmt.Errorf("failed to load server public key: %w", err)
+ }
+ }
+ handler.serverPublicKey = publicKey
+ }
+
+ return handler, nil
+}
+
+// ProcessCommand processes a command with signature verification
+func (h *CommandHandler) ProcessCommand(cmd client.CommandItem, cfg *config.Config, agentID uuid.UUID) error {
+ signing := cfg.CommandSigning
+
+ if signing.Enabled {
+ if signing.EnforcementMode == "strict" {
+ // Strict mode: Verification is required
+ if cmd.Signature == "" {
+ err := fmt.Errorf("strict enforcement enabled but command not signed")
+ h.securityLogger.LogCommandVerificationFailed(cmd.ID, "missing signature")
+ return fmt.Errorf("command verification failed: %w", err)
+ }
+
+ err := h.verifier.VerifyCommand(cmd, h.serverPublicKey)
+ if err != nil {
+ h.securityLogger.LogCommandVerificationFailed(cmd.ID, err.Error())
+ return fmt.Errorf("command verification failed: %w", err)
+ }
+ h.securityLogger.LogCommandVerificationSuccess(cmd.ID)
+ } else if signing.EnforcementMode == "warning" {
+ // Warning mode: Log failures but allow execution
+ if cmd.Signature != "" {
+ err := h.verifier.VerifyCommand(cmd, h.serverPublicKey)
+ if err != nil {
+ h.logger.Printf("[WARNING] Command verification failed but allowed in warning mode: %v", err)
+ h.securityLogger.LogCommandVerificationFailed(cmd.ID, err.Error())
+ } else {
+ h.securityLogger.LogCommandVerificationSuccess(cmd.ID)
+ }
+ } else {
+ h.logger.Printf("[WARNING] Command not signed but allowed in warning mode")
+ }
+ }
+ // disabled mode: Skip verification entirely
+ } else if cmd.Signature != "" {
+ // Signing is disabled but command has signature - log info
+ h.logger.Printf("[INFO] Command has signature but signing is disabled")
+ }
+
+ return nil
+}
+
+// UpdateServerPublicKey updates the cached server public key
+func (h *CommandHandler) UpdateServerPublicKey(serverURL string) error {
+ publicKey, err := crypto.FetchAndCacheServerPublicKey(serverURL)
+ if err != nil {
+ return fmt.Errorf("failed to update server public key: %w", err)
+ }
+ h.serverPublicKey = publicKey
+ h.logger.Printf("Server public key updated successfully")
+ return nil
+}
\ No newline at end of file
diff --git a/aggregator-agent/internal/orchestrator/orchestrator.go b/aggregator-agent/internal/orchestrator/orchestrator.go
index b446b5c..4836a80 100644
--- a/aggregator-agent/internal/orchestrator/orchestrator.go
+++ b/aggregator-agent/internal/orchestrator/orchestrator.go
@@ -9,6 +9,8 @@ import (
"github.com/Fimeg/RedFlag/aggregator-agent/internal/circuitbreaker"
"github.com/Fimeg/RedFlag/aggregator-agent/internal/client"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/event"
+ "github.com/Fimeg/RedFlag/aggregator-agent/internal/models"
)
// Scanner represents a generic update scanner
@@ -42,8 +44,9 @@ type ScanResult struct {
// Orchestrator manages and coordinates multiple scanners
type Orchestrator struct {
- scanners map[string]*ScannerConfig
- mu sync.RWMutex
+ scanners map[string]*ScannerConfig
+ eventBuffer *event.Buffer
+ mu sync.RWMutex
}
// NewOrchestrator creates a new scanner orchestrator
@@ -53,6 +56,14 @@ func NewOrchestrator() *Orchestrator {
}
}
+// NewOrchestratorWithEvents creates a new scanner orchestrator with event buffering
+func NewOrchestratorWithEvents(buffer *event.Buffer) *Orchestrator {
+ return &Orchestrator{
+ scanners: make(map[string]*ScannerConfig),
+ eventBuffer: buffer,
+ }
+}
+
// RegisterScanner adds a scanner to the orchestrator
func (o *Orchestrator) RegisterScanner(name string, scanner Scanner, cb *circuitbreaker.CircuitBreaker, timeout time.Duration, enabled bool) {
o.mu.Lock()
@@ -135,6 +146,27 @@ func (o *Orchestrator) executeScan(ctx context.Context, name string, cfg *Scanne
if !cfg.Enabled {
result.Status = "disabled"
log.Printf("[%s] Scanner disabled via configuration", name)
+
+ // Buffer disabled event if event buffer is available
+ if o.eventBuffer != nil {
+ evt := &models.SystemEvent{
+ EventType: "agent_scan",
+ EventSubtype: "skipped",
+ Severity: "info",
+ Component: "scanner",
+ Message: fmt.Sprintf("Scanner %s is disabled via configuration", name),
+ Metadata: map[string]interface{}{
+ "scanner_name": name,
+ "status": "disabled",
+ "reason": "configuration",
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := o.eventBuffer.BufferEvent(evt); err != nil {
+ log.Printf("Warning: Failed to buffer scanner disabled event: %v", err)
+ }
+ }
+
return result
}
@@ -142,6 +174,27 @@ func (o *Orchestrator) executeScan(ctx context.Context, name string, cfg *Scanne
if !cfg.Scanner.IsAvailable() {
result.Status = "unavailable"
log.Printf("[%s] Scanner not available on this system", name)
+
+ // Buffer unavailable event if event buffer is available
+ if o.eventBuffer != nil {
+ evt := &models.SystemEvent{
+ EventType: "agent_scan",
+ EventSubtype: "skipped",
+ Severity: "info",
+ Component: "scanner",
+ Message: fmt.Sprintf("Scanner %s is not available on this system", name),
+ Metadata: map[string]interface{}{
+ "scanner_name": name,
+ "status": "unavailable",
+ "reason": "system_incompatible",
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := o.eventBuffer.BufferEvent(evt); err != nil {
+ log.Printf("Warning: Failed to buffer scanner unavailable event: %v", err)
+ }
+ }
+
return result
}
@@ -185,12 +238,55 @@ func (o *Orchestrator) executeScan(ctx context.Context, name string, cfg *Scanne
result.Error = err
result.Status = "failed"
log.Printf("[%s] Scan failed: %v", name, err)
+
+ // Buffer event if event buffer is available
+ if o.eventBuffer != nil {
+ evt := &models.SystemEvent{
+ EventType: "agent_scan",
+ EventSubtype: "failed",
+ Severity: "error",
+ Component: "scanner",
+ Message: fmt.Sprintf("Scanner %s failed: %v", name, err),
+ Metadata: map[string]interface{}{
+ "scanner_name": name,
+ "error_type": "scan_failed",
+ "error_details": err.Error(),
+ "duration_ms": result.Duration.Milliseconds(),
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := o.eventBuffer.BufferEvent(evt); err != nil {
+ log.Printf("Warning: Failed to buffer scanner failure event: %v", err)
+ }
+ }
+
return result
}
result.Updates = updates
result.Status = "success"
log.Printf("[%s] Scan completed: found %d updates (took %v)", name, len(updates), result.Duration)
+
+ // Buffer success event if event buffer is available
+ if o.eventBuffer != nil {
+ evt := &models.SystemEvent{
+ EventType: "agent_scan",
+ EventSubtype: "completed",
+ Severity: "info",
+ Component: "scanner",
+ Message: fmt.Sprintf("Scanner %s completed successfully", name),
+ Metadata: map[string]interface{}{
+ "scanner_name": name,
+ "updates_found": len(updates),
+ "duration_ms": result.Duration.Milliseconds(),
+ "status": "success",
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := o.eventBuffer.BufferEvent(evt); err != nil {
+ log.Printf("Warning: Failed to buffer scanner success event: %v", err)
+ }
+ }
return result
}
diff --git a/aggregator-agent/internal/service/windows.go b/aggregator-agent/internal/service/windows.go
index a593106..6d8aba3 100644
--- a/aggregator-agent/internal/service/windows.go
+++ b/aggregator-agent/internal/service/windows.go
@@ -536,7 +536,7 @@ func (s *redflagService) renewTokenIfNeeded(apiClient *client.Client, err error)
tempClient := client.NewClient(s.agent.ServerURL, "")
// Attempt to renew access token using refresh token
- if err := tempClient.RenewToken(s.agent.AgentID, s.agent.RefreshToken); err != nil {
+ if err := tempClient.RenewToken(s.agent.AgentID, s.agent.RefreshToken, AgentVersion); err != nil {
log.Printf("❌ Refresh token renewal failed: %v", err)
elog.Error(1, fmt.Sprintf("Refresh token renewal failed: %v", err))
log.Printf("💡 Refresh token may be expired (>90 days) - re-registration required")
diff --git a/aggregator-agent/internal/version/version.go b/aggregator-agent/internal/version/version.go
new file mode 100644
index 0000000..2d5a688
--- /dev/null
+++ b/aggregator-agent/internal/version/version.go
@@ -0,0 +1,123 @@
+package version
+
+import (
+ "fmt"
+ "runtime"
+ "strings"
+ "time"
+)
+
+// Build-time injected version information
+// These will be set via ldflags during build (SERVER AUTHORITY)
+var (
+ // Version is the agent version (e.g., "0.1.23.6")
+ // Injected by server during build: -ldflags "-X github.com/redflag/redflag/internal/version.Version=0.1.23.6"
+ Version = "dev"
+
+ // ConfigVersion is the config schema version this agent expects (e.g., "6")
+ // Injected by server during build: -ldflags "-X github.com/redflag/redflag/internal/version.ConfigVersion=6"
+ ConfigVersion = "dev"
+
+ // BuildTime is when this binary was built
+ BuildTime = "unknown"
+
+ // GitCommit is the git commit hash
+ GitCommit = "unknown"
+
+ // GoVersion is the Go version used to build
+ GoVersion = runtime.Version()
+)
+
+// ExtractConfigVersionFromAgent extracts the config version from the agent version
+// Agent version format: v0.1.23.6 where the fourth octet (.6) maps to config version
+// This provides the traditional mapping when only agent version is available
+func ExtractConfigVersionFromAgent(agentVer string) string {
+ // Strip 'v' prefix if present
+ cleanVersion := strings.TrimPrefix(agentVer, "v")
+
+ // Split version parts
+ parts := strings.Split(cleanVersion, ".")
+ if len(parts) == 4 {
+ // Return the fourth octet as the config version
+ // v0.1.23.6 β "6"
+ return parts[3]
+ }
+
+ // If we have a build-time injected ConfigVersion, use it
+ if ConfigVersion != "dev" {
+ return ConfigVersion
+ }
+
+ // Default fallback
+ return "6"
+}
+
+// Info holds complete version information
+type Info struct {
+ AgentVersion string `json:"agent_version"`
+ ConfigVersion string `json:"config_version"`
+ BuildTime string `json:"build_time"`
+ GitCommit string `json:"git_commit"`
+ GoVersion string `json:"go_version"`
+ BuildTimestamp int64 `json:"build_timestamp"`
+}
+
+// GetInfo returns complete version information
+func GetInfo() Info {
+ // Parse build time if available
+ timestamp := time.Now().Unix()
+ if BuildTime != "unknown" {
+ if t, err := time.Parse(time.RFC3339, BuildTime); err == nil {
+ timestamp = t.Unix()
+ }
+ }
+
+ return Info{
+ AgentVersion: Version,
+ ConfigVersion: ConfigVersion,
+ BuildTime: BuildTime,
+ GitCommit: GitCommit,
+ GoVersion: GoVersion,
+ BuildTimestamp: timestamp,
+ }
+}
+
+// String returns a human-readable version string
+func String() string {
+ return fmt.Sprintf("RedFlag Agent v%s (config v%s)", Version, ConfigVersion)
+}
+
+// FullString returns detailed version information
+func FullString() string {
+ info := GetInfo()
+ return fmt.Sprintf("RedFlag Agent v%s (config v%s)\n"+
+ "Built: %s\n"+
+ "Commit: %s\n"+
+ "Go: %s",
+ info.AgentVersion,
+ info.ConfigVersion,
+ info.BuildTime,
+ info.GitCommit,
+ info.GoVersion)
+}
+
+// CheckCompatible checks if the given config version is compatible with this agent
+func CheckCompatible(configVer string) error {
+ if configVer == "" {
+ return fmt.Errorf("config version is empty")
+ }
+
+ // For now, require exact match
+ // In the future, we may support backward/forward compatibility matrices
+ if configVer != ConfigVersion {
+ return fmt.Errorf("config version mismatch: agent expects v%s, config has v%s",
+ ConfigVersion, configVer)
+ }
+
+ return nil
+}
+
+// Valid checks if version information is properly set
+func Valid() bool {
+ return Version != "dev" && ConfigVersion != "dev"
+}
\ No newline at end of file
diff --git a/aggregator-server/Dockerfile b/aggregator-server/Dockerfile
index b836734..cb117b6 100644
--- a/aggregator-server/Dockerfile
+++ b/aggregator-server/Dockerfile
@@ -1,17 +1,26 @@
# Stage 1: Build server binary
-FROM golang:1.23-alpine AS server-builder
+FROM golang:1.24-alpine AS server-builder
WORKDIR /app
+
+# Install git for module resolution
+RUN apk add --no-cache git
+
+# Copy go.mod and go.sum
COPY aggregator-server/go.mod aggregator-server/go.sum ./
RUN go mod download
-COPY aggregator-server/ .
+COPY aggregator-server/ ./
RUN CGO_ENABLED=0 go build -o redflag-server cmd/server/main.go
# Stage 2: Build agent binaries for all platforms
-FROM golang:1.23-alpine AS agent-builder
+FROM golang:1.24-alpine AS agent-builder
WORKDIR /build
+
+# Install git for module resolution
+RUN apk add --no-cache git
+
# Copy agent source code
COPY aggregator-agent/ ./
@@ -30,7 +39,7 @@ RUN CGO_ENABLED=0 GOOS=windows GOARCH=arm64 go build -o binaries/windows-arm64/r
# Stage 3: Final image with server and all agent binaries
FROM alpine:latest
-RUN apk --no-cache add ca-certificates tzdata
+RUN apk --no-cache add ca-certificates tzdata bash
WORKDIR /app
# Copy server binary
@@ -40,6 +49,11 @@ COPY --from=server-builder /app/internal/database ./internal/database
# Copy all agent binaries
COPY --from=agent-builder /build/binaries ./binaries
+# Copy and setup entrypoint script
+COPY aggregator-server/docker-entrypoint.sh /usr/local/bin/
+RUN chmod +x /usr/local/bin/docker-entrypoint.sh
+
EXPOSE 8080
+ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["./redflag-server"]
\ No newline at end of file
diff --git a/aggregator-server/cmd/server/main.go b/aggregator-server/cmd/server/main.go
index 9d11d44..c484b04 100644
--- a/aggregator-server/cmd/server/main.go
+++ b/aggregator-server/cmd/server/main.go
@@ -16,6 +16,7 @@ import (
"github.com/Fimeg/RedFlag/aggregator-server/internal/config"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/logging"
"github.com/Fimeg/RedFlag/aggregator-server/internal/scheduler"
"github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/gin-gonic/gin"
@@ -46,6 +47,46 @@ func validateSigningService(signingService *services.SigningService) error {
return nil
}
+// isSetupComplete checks if the server has been fully configured
+// Returns true if all required components are ready for production
+// Components checked: admin credentials, signing keys, database connectivity
+func isSetupComplete(cfg *config.Config, signingService *services.SigningService, db *database.DB) bool {
+ // Check if signing keys are configured
+ if cfg.SigningPrivateKey == "" {
+ log.Printf("Setup incomplete: Signing keys not configured")
+ return false
+ }
+
+ // Check if admin password is configured (not empty)
+ if cfg.Admin.Password == "" {
+ log.Printf("Setup incomplete: Admin password not configured")
+ return false
+ }
+
+ // Check if JWT secret is configured
+ if cfg.Admin.JWTSecret == "" {
+ log.Printf("Setup incomplete: JWT secret not configured")
+ return false
+ }
+
+ // Check if database connection is working
+ if err := db.DB.Ping(); err != nil {
+ log.Printf("Setup incomplete: Database not accessible: %v", err)
+ return false
+ }
+
+ // Check if database has been migrated (check for agents table)
+ var agentCount int
+ if err := db.DB.Get(&agentCount, "SELECT COUNT(*) FROM information_schema.tables WHERE table_name = 'agents'"); err != nil || agentCount == 0 {
+ log.Printf("Setup incomplete: Database migrations not complete - agents table does not exist")
+ return false
+ }
+
+ // All critical checks passed
+ log.Printf("Setup validation passed: All required components configured")
+ return true
+}
+
func startWelcomeModeServer() {
setupHandler := handlers.NewSetupHandler("/app/config")
router := gin.Default()
@@ -70,6 +111,7 @@ func startWelcomeModeServer() {
// Setup endpoint for web configuration
router.POST("/api/setup/configure", setupHandler.ConfigureServer)
router.POST("/api/setup/generate-keys", setupHandler.GenerateSigningKeys)
+ router.POST("/api/setup/configure-secrets", setupHandler.ConfigureSecrets)
// Setup endpoint for web configuration
router.GET("/setup", setupHandler.ShowSetupPage)
@@ -138,7 +180,7 @@ func main() {
if err := db.Migrate(migrationsPath); err != nil {
log.Fatal("Migration failed:", err)
}
- fmt.Printf("✅ Database migrations completed\n")
+ fmt.Printf("[OK] Database migrations completed\n")
return
}
@@ -149,25 +191,21 @@ func main() {
// In production, you might want to handle this more gracefully
fmt.Printf("Warning: Migration failed (tables may already exist): %v\n", err)
}
+ fmt.Println("[OK] Database migrations completed")
- // Initialize queries
agentQueries := queries.NewAgentQueries(db.DB)
updateQueries := queries.NewUpdateQueries(db.DB)
commandQueries := queries.NewCommandQueries(db.DB)
refreshTokenQueries := queries.NewRefreshTokenQueries(db.DB)
registrationTokenQueries := queries.NewRegistrationTokenQueries(db.DB)
- userQueries := queries.NewUserQueries(db.DB)
subsystemQueries := queries.NewSubsystemQueries(db.DB)
agentUpdateQueries := queries.NewAgentUpdateQueries(db.DB)
metricsQueries := queries.NewMetricsQueries(db.DB.DB)
dockerQueries := queries.NewDockerQueries(db.DB.DB)
+ adminQueries := queries.NewAdminQueries(db.DB)
- // Ensure admin user exists
- if err := userQueries.EnsureAdminUser(cfg.Admin.Username, cfg.Admin.Username+"@redflag.local", cfg.Admin.Password); err != nil {
- fmt.Printf("Warning: Failed to create admin user: %v\n", err)
- } else {
- fmt.Println("✅ Admin user ensured")
- }
+ // Create PackageQueries for accessing signed agent update packages
+ packageQueries := queries.NewPackageQueries(db.DB)
// Initialize services
timezoneService := services.NewTimezoneService(cfg)
@@ -197,23 +235,82 @@ func main() {
log.Printf("[WARNING] No signing private key configured - agent update signing disabled")
log.Printf("[INFO] Generate keys: POST /api/setup/generate-keys")
}
+ // Initialize default security settings (critical for v0.2.x)
+ fmt.Println("[OK] Initializing default security settings...")
+ securitySettingsQueries := queries.NewSecuritySettingsQueries(db.DB)
+ securitySettingsService, err := services.NewSecuritySettingsService(securitySettingsQueries, signingService)
+ if err != nil {
+ fmt.Printf("Warning: Failed to create security settings service: %v\n", err)
+ fmt.Println("Security settings will need to be configured manually via the dashboard")
+ } else if err := securitySettingsService.InitializeDefaultSettings(); err != nil {
+ fmt.Printf("Warning: Failed to initialize default security settings: %v\n", err)
+ fmt.Println("Security settings will need to be configured manually via the dashboard")
+ } else {
+ fmt.Println("[OK] Default security settings initialized")
+ }
+
+ // Check if setup is complete
+ if !isSetupComplete(cfg, signingService, db) {
+ serverAddr := cfg.Server.Host
+ if serverAddr == "" {
+ serverAddr = "localhost"
+ }
+ log.Printf("Server setup incomplete - starting welcome mode")
+ log.Printf("Setup required: Admin credentials, signing keys, and database configuration")
+ log.Printf("Access setup at: http://%s:%d/setup", serverAddr, cfg.Server.Port)
+ startWelcomeModeServer()
+ return
+ }
+
+ // Initialize admin user from .env configuration
+ fmt.Println("[OK] Initializing admin user...")
+ if err := adminQueries.CreateAdminIfNotExists(cfg.Admin.Username, cfg.Admin.Email, cfg.Admin.Password); err != nil {
+ log.Printf("[ERROR] Failed to initialize admin user: %v", err)
+ } else {
+ // Update admin password from .env (runs on every startup to keep in sync)
+ if err := adminQueries.UpdateAdminPassword(cfg.Admin.Username, cfg.Admin.Password); err != nil {
+ log.Printf("[WARNING] Failed to update admin password: %v", err)
+ } else {
+ fmt.Println("[OK] Admin user initialized")
+ }
+ }
+
+ // Initialize security logger
+ secConfig := logging.SecurityLogConfig{
+ Enabled: true, // Could be configurable in the future
+ Level: "warning",
+ LogSuccesses: false,
+ FilePath: "/var/log/redflag/security.json",
+ MaxSizeMB: 100,
+ MaxFiles: 10,
+ RetentionDays: 90,
+ LogToDatabase: true,
+ HashIPAddresses: true,
+ }
+ securityLogger, err := logging.NewSecurityLogger(secConfig, db.DB)
+ if err != nil {
+ log.Printf("Failed to initialize security logger: %v", err)
+ securityLogger = nil
+ }
// Initialize rate limiter
rateLimiter := middleware.NewRateLimiter()
- // Initialize handlers
- agentHandler := handlers.NewAgentHandler(agentQueries, commandQueries, refreshTokenQueries, registrationTokenQueries, subsystemQueries, cfg.CheckInInterval, cfg.LatestAgentVersion)
- updateHandler := handlers.NewUpdateHandler(updateQueries, agentQueries, commandQueries, agentHandler)
- authHandler := handlers.NewAuthHandler(cfg.Admin.JWTSecret, userQueries)
+ // Initialize handlers that don't depend on agentHandler (can be created now)
+ authHandler := handlers.NewAuthHandler(cfg.Admin.JWTSecret, adminQueries)
statsHandler := handlers.NewStatsHandler(agentQueries, updateQueries)
settingsHandler := handlers.NewSettingsHandler(timezoneService)
- dockerHandler := handlers.NewDockerHandler(updateQueries, agentQueries, commandQueries)
+ dockerHandler := handlers.NewDockerHandler(updateQueries, agentQueries, commandQueries, signingService, securityLogger)
registrationTokenHandler := handlers.NewRegistrationTokenHandler(registrationTokenQueries, agentQueries, cfg)
rateLimitHandler := handlers.NewRateLimitHandler(rateLimiter)
- downloadHandler := handlers.NewDownloadHandler(filepath.Join("/app"), cfg)
- subsystemHandler := handlers.NewSubsystemHandler(subsystemQueries, commandQueries)
+ downloadHandler := handlers.NewDownloadHandler(filepath.Join("/app"), cfg, packageQueries)
+ subsystemHandler := handlers.NewSubsystemHandler(subsystemQueries, commandQueries, signingService, securityLogger)
metricsHandler := handlers.NewMetricsHandler(metricsQueries, agentQueries, commandQueries)
dockerReportsHandler := handlers.NewDockerReportsHandler(dockerQueries, agentQueries, commandQueries)
+ agentSetupHandler := handlers.NewAgentSetupHandler(agentQueries)
+
+ // Initialize scanner config handler (for user-configurable scanner timeouts)
+ scannerConfigHandler := handlers.NewScannerConfigHandler(db.DB)
// Initialize verification handler
var verificationHandler *handlers.VerificationHandler
@@ -234,18 +331,20 @@ func main() {
}
}
- // Initialize agent update handler
- var agentUpdateHandler *handlers.AgentUpdateHandler
- if signingService != nil {
- agentUpdateHandler = handlers.NewAgentUpdateHandler(agentQueries, agentUpdateQueries, commandQueries, signingService, updateNonceService, agentHandler)
- }
-
// Initialize system handler
systemHandler := handlers.NewSystemHandler(signingService)
// Initialize security handler
securityHandler := handlers.NewSecurityHandler(signingService, agentQueries, commandQueries)
+ // Initialize security settings service and handler
+ securitySettingsService, err = services.NewSecuritySettingsService(securitySettingsQueries, signingService)
+ if err != nil {
+ log.Printf("[ERROR] Failed to initialize security settings service: %v", err)
+ securitySettingsService = nil
+ } else {
+ log.Printf("[OK] Security settings service initialized")
+ }
// Setup router
router := gin.Default()
@@ -272,156 +371,25 @@ func main() {
api.GET("/public-key", rateLimiter.RateLimit("public_access", middleware.KeyByIP), systemHandler.GetPublicKey)
api.GET("/info", rateLimiter.RateLimit("public_access", middleware.KeyByIP), systemHandler.GetSystemInfo)
- // Public routes (no authentication required, with rate limiting)
- api.POST("/agents/register", rateLimiter.RateLimit("agent_registration", middleware.KeyByIP), agentHandler.RegisterAgent)
- api.POST("/agents/renew", rateLimiter.RateLimit("public_access", middleware.KeyByIP), agentHandler.RenewToken)
-
// Agent setup routes (no authentication required, with rate limiting)
- api.POST("/setup/agent", rateLimiter.RateLimit("agent_setup", middleware.KeyByIP), handlers.SetupAgent)
- api.GET("/setup/templates", rateLimiter.RateLimit("public_access", middleware.KeyByIP), handlers.GetTemplates)
- api.POST("/setup/validate", rateLimiter.RateLimit("agent_setup", middleware.KeyByIP), handlers.ValidateConfiguration)
+ api.POST("/setup/agent", rateLimiter.RateLimit("agent_setup", middleware.KeyByIP), agentSetupHandler.SetupAgent)
+ api.GET("/setup/templates", rateLimiter.RateLimit("public_access", middleware.KeyByIP), agentSetupHandler.GetTemplates)
+ api.POST("/setup/validate", rateLimiter.RateLimit("agent_setup", middleware.KeyByIP), agentSetupHandler.ValidateConfiguration)
// Build orchestrator routes (admin-only)
buildRoutes := api.Group("/build")
buildRoutes.Use(authHandler.WebAuthMiddleware())
{
- buildRoutes.POST("/new", rateLimiter.RateLimit("agent_build", middleware.KeyByIP), handlers.NewAgentBuild)
- buildRoutes.POST("/upgrade/:agentID", rateLimiter.RateLimit("agent_build", middleware.KeyByIP), handlers.UpgradeAgentBuild)
- buildRoutes.POST("/detect", rateLimiter.RateLimit("agent_build", middleware.KeyByIP), handlers.DetectAgentInstallation)
+ buildRoutes.POST("/new", rateLimiter.RateLimit("agent_build", middleware.KeyByAgentID), handlers.NewAgentBuild)
+ buildRoutes.POST("/upgrade/:agentID", rateLimiter.RateLimit("agent_build", middleware.KeyByAgentID), handlers.UpgradeAgentBuild)
+ buildRoutes.POST("/detect", rateLimiter.RateLimit("agent_build", middleware.KeyByAgentID), handlers.DetectAgentInstallation)
}
// Public download routes (no authentication - agents need these!)
api.GET("/downloads/:platform", rateLimiter.RateLimit("public_access", middleware.KeyByIP), downloadHandler.DownloadAgent)
api.GET("/downloads/updates/:package_id", rateLimiter.RateLimit("public_access", middleware.KeyByIP), downloadHandler.DownloadUpdatePackage)
+ api.GET("/downloads/config/:agent_id", rateLimiter.RateLimit("public_access", middleware.KeyByIP), downloadHandler.HandleConfigDownload)
api.GET("/install/:platform", rateLimiter.RateLimit("public_access", middleware.KeyByIP), downloadHandler.InstallScript)
-
- // Protected agent routes (with machine binding security)
- agents := api.Group("/agents")
- agents.Use(middleware.AuthMiddleware())
- agents.Use(middleware.MachineBindingMiddleware(agentQueries, cfg.MinAgentVersion)) // v0.1.22: Prevent config copying
- {
- agents.GET("/:id/commands", agentHandler.GetCommands)
- agents.GET("/:id/config", agentHandler.GetAgentConfig)
- agents.POST("/:id/updates", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportUpdates)
- agents.POST("/:id/logs", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportLog)
- agents.POST("/:id/dependencies", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportDependencies)
- agents.POST("/:id/system-info", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), agentHandler.ReportSystemInfo)
- agents.POST("/:id/rapid-mode", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), agentHandler.SetRapidPollingMode)
- agents.POST("/:id/verify-signature", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), func(c *gin.Context) {
- if verificationHandler == nil {
- c.JSON(http.StatusServiceUnavailable, gin.H{"error": "signature verification service not available"})
- return
- }
- verificationHandler.VerifySignature(c)
- })
- agents.DELETE("/:id", agentHandler.UnregisterAgent)
-
- // New dedicated endpoints for metrics and docker images (data classification fix)
- agents.POST("/:id/metrics", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), metricsHandler.ReportMetrics)
- agents.POST("/:id/docker-images", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), dockerReportsHandler.ReportDockerImages)
- }
-
- // Dashboard/Web routes (protected by web auth)
- dashboard := api.Group("/")
- dashboard.Use(authHandler.WebAuthMiddleware())
- {
- dashboard.GET("/stats/summary", statsHandler.GetDashboardStats)
- dashboard.GET("/agents", agentHandler.ListAgents)
- dashboard.GET("/agents/:id", agentHandler.GetAgent)
- dashboard.POST("/agents/:id/scan", agentHandler.TriggerScan)
- dashboard.POST("/agents/:id/heartbeat", agentHandler.TriggerHeartbeat)
- dashboard.GET("/agents/:id/heartbeat", agentHandler.GetHeartbeatStatus)
- dashboard.POST("/agents/:id/reboot", agentHandler.TriggerReboot)
-
- // Subsystem routes for web dashboard
- dashboard.GET("/agents/:id/subsystems", subsystemHandler.GetSubsystems)
- dashboard.GET("/agents/:id/subsystems/:subsystem", subsystemHandler.GetSubsystem)
- dashboard.PATCH("/agents/:id/subsystems/:subsystem", subsystemHandler.UpdateSubsystem)
- dashboard.POST("/agents/:id/subsystems/:subsystem/enable", subsystemHandler.EnableSubsystem)
- dashboard.POST("/agents/:id/subsystems/:subsystem/disable", subsystemHandler.DisableSubsystem)
- dashboard.POST("/agents/:id/subsystems/:subsystem/trigger", subsystemHandler.TriggerSubsystem)
- dashboard.GET("/agents/:id/subsystems/:subsystem/stats", subsystemHandler.GetSubsystemStats)
- dashboard.POST("/agents/:id/subsystems/:subsystem/auto-run", subsystemHandler.SetAutoRun)
- dashboard.POST("/agents/:id/subsystems/:subsystem/interval", subsystemHandler.SetInterval)
-
- dashboard.GET("/updates", updateHandler.ListUpdates)
- dashboard.GET("/updates/:id", updateHandler.GetUpdate)
- dashboard.GET("/updates/:id/logs", updateHandler.GetUpdateLogs)
- dashboard.POST("/updates/:id/approve", updateHandler.ApproveUpdate)
- dashboard.POST("/updates/approve", updateHandler.ApproveUpdates)
- dashboard.POST("/updates/:id/reject", updateHandler.RejectUpdate)
- dashboard.POST("/updates/:id/install", updateHandler.InstallUpdate)
- dashboard.POST("/updates/:id/confirm-dependencies", updateHandler.ConfirmDependencies)
-
- // Agent update routes
- if agentUpdateHandler != nil {
- dashboard.POST("/agents/:id/update", agentUpdateHandler.UpdateAgent)
- dashboard.POST("/agents/:id/update-nonce", agentUpdateHandler.GenerateUpdateNonce)
- dashboard.POST("/agents/bulk-update", agentUpdateHandler.BulkUpdateAgents)
- dashboard.GET("/updates/packages", agentUpdateHandler.ListUpdatePackages)
- dashboard.POST("/updates/packages/sign", agentUpdateHandler.SignUpdatePackage)
- dashboard.GET("/agents/:id/updates/available", agentUpdateHandler.CheckForUpdateAvailable)
- dashboard.GET("/agents/:id/updates/status", agentUpdateHandler.GetUpdateStatus)
- }
-
- // Log routes
- dashboard.GET("/logs", updateHandler.GetAllLogs)
- dashboard.GET("/logs/active", updateHandler.GetActiveOperations)
-
- // Command routes
- dashboard.GET("/commands/active", updateHandler.GetActiveCommands)
- dashboard.GET("/commands/recent", updateHandler.GetRecentCommands)
- dashboard.POST("/commands/:id/retry", updateHandler.RetryCommand)
- dashboard.POST("/commands/:id/cancel", updateHandler.CancelCommand)
- dashboard.DELETE("/commands/failed", updateHandler.ClearFailedCommands)
-
- // Settings routes
- dashboard.GET("/settings/timezone", settingsHandler.GetTimezone)
- dashboard.GET("/settings/timezones", settingsHandler.GetTimezones)
- dashboard.PUT("/settings/timezone", settingsHandler.UpdateTimezone)
-
- // Docker routes
- dashboard.GET("/docker/containers", dockerHandler.GetContainers)
- dashboard.GET("/docker/stats", dockerHandler.GetStats)
- dashboard.POST("/docker/containers/:container_id/images/:image_id/approve", dockerHandler.ApproveUpdate)
- dashboard.POST("/docker/containers/:container_id/images/:image_id/reject", dockerHandler.RejectUpdate)
- dashboard.POST("/docker/containers/:container_id/images/:image_id/install", dockerHandler.InstallUpdate)
-
- // Metrics and Docker images routes (data classification fix)
- dashboard.GET("/agents/:id/metrics", metricsHandler.GetAgentMetrics)
- dashboard.GET("/agents/:id/metrics/storage", metricsHandler.GetAgentStorageMetrics)
- dashboard.GET("/agents/:id/metrics/system", metricsHandler.GetAgentSystemMetrics)
- dashboard.GET("/agents/:id/docker-images", dockerReportsHandler.GetAgentDockerImages)
- dashboard.GET("/agents/:id/docker-info", dockerReportsHandler.GetAgentDockerInfo)
-
- // Admin/Registration Token routes (for agent enrollment management)
- admin := dashboard.Group("/admin")
- {
- admin.POST("/registration-tokens", rateLimiter.RateLimit("admin_token_gen", middleware.KeyByUserID), registrationTokenHandler.GenerateRegistrationToken)
- admin.GET("/registration-tokens", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.ListRegistrationTokens)
- admin.GET("/registration-tokens/active", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.GetActiveRegistrationTokens)
- admin.DELETE("/registration-tokens/:token", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.RevokeRegistrationToken)
- admin.DELETE("/registration-tokens/delete/:id", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.DeleteRegistrationToken)
- admin.POST("/registration-tokens/cleanup", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.CleanupExpiredTokens)
- admin.GET("/registration-tokens/stats", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.GetTokenStats)
- admin.GET("/registration-tokens/validate", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.ValidateRegistrationToken)
-
- // Rate Limit Management
- admin.GET("/rate-limits", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.GetRateLimitSettings)
- admin.PUT("/rate-limits", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.UpdateRateLimitSettings)
- admin.POST("/rate-limits/reset", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.ResetRateLimitSettings)
- admin.GET("/rate-limits/stats", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.GetRateLimitStats)
- admin.POST("/rate-limits/cleanup", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.CleanupRateLimitEntries)
- }
-
- // Security Health Check endpoints
- dashboard.GET("/security/overview", securityHandler.SecurityOverview)
- dashboard.GET("/security/signing", securityHandler.SigningStatus)
- dashboard.GET("/security/nonce", securityHandler.NonceValidationStatus)
- dashboard.GET("/security/commands", securityHandler.CommandValidationStatus)
- dashboard.GET("/security/machine-binding", securityHandler.MachineBindingStatus)
- dashboard.GET("/security/metrics", securityHandler.SecurityMetrics)
- }
}
// Start background goroutine to mark offline agents
@@ -452,6 +420,167 @@ func main() {
schedulerConfig := scheduler.DefaultConfig()
subsystemScheduler := scheduler.NewScheduler(schedulerConfig, agentQueries, commandQueries, subsystemQueries)
+ // Initialize agentHandler now that scheduler is available
+ agentHandler := handlers.NewAgentHandler(agentQueries, commandQueries, refreshTokenQueries, registrationTokenQueries, subsystemQueries, subsystemScheduler, signingService, securityLogger, cfg.CheckInInterval, cfg.LatestAgentVersion)
+
+ // Initialize agent update handler now that agentHandler is available
+ var agentUpdateHandler *handlers.AgentUpdateHandler
+ if signingService != nil {
+ agentUpdateHandler = handlers.NewAgentUpdateHandler(agentQueries, agentUpdateQueries, commandQueries, signingService, updateNonceService, agentHandler)
+ }
+
+ // Initialize updateHandler with the agentHandler reference
+ updateHandler := handlers.NewUpdateHandler(updateQueries, agentQueries, commandQueries, agentHandler)
+
+ // Add routes that depend on agentHandler (must be after agentHandler creation)
+ api.POST("/agents/register", rateLimiter.RateLimit("agent_registration", middleware.KeyByIP), agentHandler.RegisterAgent)
+ api.POST("/agents/renew", rateLimiter.RateLimit("public_access", middleware.KeyByIP), agentHandler.RenewToken)
+
+ // Protected agent routes (with machine binding security)
+ agents := api.Group("/agents")
+ agents.Use(middleware.AuthMiddleware())
+ agents.Use(middleware.MachineBindingMiddleware(agentQueries, cfg.MinAgentVersion)) // v0.1.22: Prevent config copying
+ {
+ agents.GET("/:id/commands", agentHandler.GetCommands)
+ agents.GET("/:id/config", agentHandler.GetAgentConfig)
+ agents.POST("/:id/updates", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportUpdates)
+ agents.POST("/:id/logs", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportLog)
+ agents.POST("/:id/dependencies", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), updateHandler.ReportDependencies)
+ agents.POST("/:id/system-info", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), agentHandler.ReportSystemInfo)
+ agents.POST("/:id/rapid-mode", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), agentHandler.SetRapidPollingMode)
+ agents.POST("/:id/verify-signature", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), func(c *gin.Context) {
+ if verificationHandler == nil {
+ c.JSON(http.StatusServiceUnavailable, gin.H{"error": "signature verification service not available"})
+ return
+ }
+ verificationHandler.VerifySignature(c)
+ })
+ agents.DELETE("/:id", agentHandler.UnregisterAgent)
+
+ // New dedicated endpoints for metrics and docker images (data classification fix)
+ agents.POST("/:id/metrics", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), metricsHandler.ReportMetrics)
+ agents.POST("/:id/docker-images", rateLimiter.RateLimit("agent_reports", middleware.KeyByAgentID), dockerReportsHandler.ReportDockerImages)
+ }
+
+ // Dashboard/Web routes (protected by web auth)
+ dashboard := api.Group("/")
+ dashboard.Use(authHandler.WebAuthMiddleware())
+ {
+ dashboard.GET("/stats/summary", statsHandler.GetDashboardStats)
+ dashboard.GET("/agents", agentHandler.ListAgents)
+ dashboard.GET("/agents/:id", agentHandler.GetAgent)
+ dashboard.POST("/agents/:id/scan", agentHandler.TriggerScan)
+ dashboard.POST("/agents/:id/heartbeat", agentHandler.TriggerHeartbeat)
+ dashboard.GET("/agents/:id/heartbeat", agentHandler.GetHeartbeatStatus)
+ dashboard.POST("/agents/:id/reboot", agentHandler.TriggerReboot)
+
+ // Subsystem routes for web dashboard
+ dashboard.GET("/agents/:id/subsystems", subsystemHandler.GetSubsystems)
+ dashboard.GET("/agents/:id/subsystems/:subsystem", subsystemHandler.GetSubsystem)
+ dashboard.PATCH("/agents/:id/subsystems/:subsystem", subsystemHandler.UpdateSubsystem)
+ dashboard.POST("/agents/:id/subsystems/:subsystem/enable", subsystemHandler.EnableSubsystem)
+ dashboard.POST("/agents/:id/subsystems/:subsystem/disable", subsystemHandler.DisableSubsystem)
+ dashboard.POST("/agents/:id/subsystems/:subsystem/trigger", subsystemHandler.TriggerSubsystem)
+ dashboard.GET("/agents/:id/subsystems/:subsystem/stats", subsystemHandler.GetSubsystemStats)
+ dashboard.POST("/agents/:id/subsystems/:subsystem/auto-run", subsystemHandler.SetAutoRun)
+ dashboard.POST("/agents/:id/subsystems/:subsystem/interval", subsystemHandler.SetInterval)
+
+ dashboard.GET("/updates", updateHandler.ListUpdates)
+ dashboard.GET("/updates/:id", updateHandler.GetUpdate)
+ dashboard.GET("/updates/:id/logs", updateHandler.GetUpdateLogs)
+ dashboard.POST("/updates/:id/approve", updateHandler.ApproveUpdate)
+ dashboard.POST("/updates/approve", updateHandler.ApproveUpdates)
+ dashboard.POST("/updates/:id/reject", updateHandler.RejectUpdate)
+ dashboard.POST("/updates/:id/install", updateHandler.InstallUpdate)
+ dashboard.POST("/updates/:id/confirm-dependencies", updateHandler.ConfirmDependencies)
+
+ // Agent update routes
+ if agentUpdateHandler != nil {
+ dashboard.POST("/agents/:id/update", agentUpdateHandler.UpdateAgent)
+ dashboard.POST("/agents/:id/update-nonce", agentUpdateHandler.GenerateUpdateNonce)
+ dashboard.POST("/agents/bulk-update", agentUpdateHandler.BulkUpdateAgents)
+ dashboard.GET("/updates/packages", agentUpdateHandler.ListUpdatePackages)
+ dashboard.POST("/updates/packages/sign", agentUpdateHandler.SignUpdatePackage)
+ dashboard.GET("/agents/:id/updates/available", agentUpdateHandler.CheckForUpdateAvailable)
+ dashboard.GET("/agents/:id/updates/status", agentUpdateHandler.GetUpdateStatus)
+ }
+
+ dashboard.GET("/logs", updateHandler.GetAllLogs)
+ dashboard.GET("/logs/active", updateHandler.GetActiveOperations)
+
+ // Command routes
+ dashboard.GET("/commands/active", updateHandler.GetActiveCommands)
+ dashboard.GET("/commands/recent", updateHandler.GetRecentCommands)
+ dashboard.POST("/commands/:id/retry", updateHandler.RetryCommand)
+ dashboard.POST("/commands/:id/cancel", updateHandler.CancelCommand)
+ dashboard.DELETE("/commands/failed", updateHandler.ClearFailedCommands)
+
+ // Settings routes
+ dashboard.GET("/settings/timezone", settingsHandler.GetTimezone)
+ dashboard.GET("/settings/timezones", settingsHandler.GetTimezones)
+ dashboard.PUT("/settings/timezone", settingsHandler.UpdateTimezone)
+
+ // Docker routes
+ dashboard.GET("/docker/containers", dockerHandler.GetContainers)
+ dashboard.GET("/docker/stats", dockerHandler.GetStats)
+ dashboard.POST("/docker/containers/:container_id/images/:image_id/approve", dockerHandler.ApproveUpdate)
+ dashboard.POST("/docker/containers/:container_id/images/:image_id/reject", dockerHandler.RejectUpdate)
+ dashboard.POST("/docker/containers/:container_id/images/:image_id/install", dockerHandler.InstallUpdate)
+
+ // Metrics and Docker images routes (data classification fix)
+ dashboard.GET("/agents/:id/metrics", metricsHandler.GetAgentMetrics)
+ dashboard.GET("/agents/:id/metrics/storage", metricsHandler.GetAgentStorageMetrics)
+ dashboard.GET("/agents/:id/metrics/system", metricsHandler.GetAgentSystemMetrics)
+ dashboard.GET("/agents/:id/docker-images", dockerReportsHandler.GetAgentDockerImages)
+ dashboard.GET("/agents/:id/docker-info", dockerReportsHandler.GetAgentDockerInfo)
+
+ // Admin/Registration Token routes (for agent enrollment management)
+ admin := dashboard.Group("/admin")
+ {
+ admin.POST("/registration-tokens", rateLimiter.RateLimit("admin_token_gen", middleware.KeyByUserID), registrationTokenHandler.GenerateRegistrationToken)
+ admin.GET("/registration-tokens", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.ListRegistrationTokens)
+ admin.GET("/registration-tokens/active", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.GetActiveRegistrationTokens)
+ admin.DELETE("/registration-tokens/:token", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.RevokeRegistrationToken)
+ admin.DELETE("/registration-tokens/delete/:id", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.DeleteRegistrationToken)
+ admin.POST("/registration-tokens/cleanup", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.CleanupExpiredTokens)
+ admin.GET("/registration-tokens/stats", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.GetTokenStats)
+ admin.GET("/registration-tokens/validate", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), registrationTokenHandler.ValidateRegistrationToken)
+
+ // Rate Limit Management
+ admin.GET("/rate-limits", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.GetRateLimitSettings)
+ admin.PUT("/rate-limits", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.UpdateRateLimitSettings)
+ admin.POST("/rate-limits/reset", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.ResetRateLimitSettings)
+ admin.GET("/rate-limits/stats", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.GetRateLimitStats)
+ admin.POST("/rate-limits/cleanup", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), rateLimitHandler.CleanupRateLimitEntries)
+
+ // Scanner Configuration (user-configurable timeouts)
+ admin.GET("/scanner-timeouts", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), scannerConfigHandler.GetScannerTimeouts)
+ admin.PUT("/scanner-timeouts/:scanner_name", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), scannerConfigHandler.UpdateScannerTimeout)
+ admin.POST("/scanner-timeouts/:scanner_name/reset", rateLimiter.RateLimit("admin_operations", middleware.KeyByUserID), scannerConfigHandler.ResetScannerTimeout)
+ }
+
+ // Security Health Check endpoints
+ dashboard.GET("/security/overview", securityHandler.SecurityOverview)
+ dashboard.GET("/security/signing", securityHandler.SigningStatus)
+ dashboard.GET("/security/nonce", securityHandler.NonceValidationStatus)
+ dashboard.GET("/security/commands", securityHandler.CommandValidationStatus)
+ dashboard.GET("/security/machine-binding", securityHandler.MachineBindingStatus)
+ dashboard.GET("/security/metrics", securityHandler.SecurityMetrics)
+
+ // Security Settings Management endpoints (admin-only)
+// securitySettings := dashboard.Group("/security/settings")
+// securitySettings.Use(middleware.RequireAdmin())
+// {
+// securitySettings.GET("", securitySettingsHandler.GetAllSecuritySettings)
+// securitySettings.GET("/audit", securitySettingsHandler.GetSecurityAuditTrail)
+// securitySettings.GET("/overview", securitySettingsHandler.GetSecurityOverview)
+// securitySettings.GET("/:category", securitySettingsHandler.GetSecuritySettingsByCategory)
+// securitySettings.PUT("/:category/:key", securitySettingsHandler.UpdateSecuritySetting)
+// securitySettings.POST("/validate", securitySettingsHandler.ValidateSecuritySettings)
+// securitySettings.POST("/apply", securitySettingsHandler.ApplySecuritySettings)
+// }
+ }
+
// Load subsystems into queue
ctx := context.Background()
if err := subsystemScheduler.LoadSubsystems(ctx); err != nil {
diff --git a/aggregator-server/go.mod b/aggregator-server/go.mod
index 801b54c..c459999 100644
--- a/aggregator-server/go.mod
+++ b/aggregator-server/go.mod
@@ -1,48 +1,71 @@
module github.com/Fimeg/RedFlag/aggregator-server
-go 1.23.0
+go 1.24.0
require (
+ github.com/docker/docker v25.0.6+incompatible
github.com/gin-gonic/gin v1.11.0
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/google/uuid v1.6.0
github.com/jmoiron/sqlx v1.4.0
github.com/lib/pq v1.10.9
- golang.org/x/crypto v0.40.0
+ golang.org/x/crypto v0.44.0
+ gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
- github.com/Fimeg/RedFlag/aggregator v0.0.0
+ github.com/Microsoft/go-winio v0.6.2 // indirect
+ github.com/alexedwards/argon2id v1.0.0 // indirect
github.com/bytedance/sonic v1.14.0 // indirect
github.com/bytedance/sonic/loader v0.3.0 // indirect
+ github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudwego/base64x v0.1.6 // indirect
+ github.com/containerd/log v0.1.0 // indirect
+ github.com/distribution/reference v0.6.0 // indirect
+ github.com/docker/go-connections v0.4.0 // indirect
+ github.com/docker/go-units v0.5.0 // indirect
+ github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
+ github.com/go-logr/logr v1.4.3 // indirect
+ github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/goccy/go-yaml v1.18.0 // indirect
+ github.com/gogo/protobuf v1.3.2 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
+ github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
+ github.com/morikuni/aec v1.1.0 // indirect
+ github.com/opencontainers/go-digest v1.0.0 // indirect
+ github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
+ github.com/pkg/errors v0.9.1 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/quic-go/quic-go v0.54.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
+ go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 // indirect
+ go.opentelemetry.io/otel v1.39.0 // indirect
+ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 // indirect
+ go.opentelemetry.io/otel/metric v1.39.0 // indirect
+ go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.uber.org/mock v0.5.0 // indirect
golang.org/x/arch v0.20.0 // indirect
- golang.org/x/mod v0.25.0 // indirect
- golang.org/x/net v0.42.0 // indirect
- golang.org/x/sync v0.16.0 // indirect
- golang.org/x/sys v0.35.0 // indirect
- golang.org/x/text v0.27.0 // indirect
- golang.org/x/tools v0.34.0 // indirect
- google.golang.org/protobuf v1.36.9 // indirect
+ golang.org/x/mod v0.29.0 // indirect
+ golang.org/x/net v0.47.0 // indirect
+ golang.org/x/sync v0.18.0 // indirect
+ golang.org/x/sys v0.39.0 // indirect
+ golang.org/x/text v0.31.0 // indirect
+ golang.org/x/time v0.14.0 // indirect
+ golang.org/x/tools v0.38.0 // indirect
+ google.golang.org/protobuf v1.36.10 // indirect
+ gotest.tools/v3 v3.5.2 // indirect
)
-
-replace github.com/Fimeg/RedFlag/aggregator => ../aggregator
diff --git a/aggregator-server/go.sum b/aggregator-server/go.sum
index 0cdc5c4..ccaf204 100644
--- a/aggregator-server/go.sum
+++ b/aggregator-server/go.sum
@@ -1,20 +1,47 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
+github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
+github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
+github.com/alexedwards/argon2id v1.0.0 h1:wJzDx66hqWX7siL/SRUmgz3F8YMrd/nfX/xHHcQQP0w=
+github.com/alexedwards/argon2id v1.0.0/go.mod h1:tYKkqIjzXvZdzPvADMWOEZ+l6+BD6CtBXMj5fnJppiw=
github.com/bytedance/sonic v1.14.0 h1:/OfKt8HFw0kh2rj8N0F6C/qPGRESq0BbaNZgcNXXzQQ=
github.com/bytedance/sonic v1.14.0/go.mod h1:WoEbx8WTcFJfzCe0hbmyTGrfjt8PzNEBdxlNUO24NhA=
github.com/bytedance/sonic/loader v0.3.0 h1:dskwH8edlzNMctoruo8FPTJDF3vLtDT0sXZwvZJyqeA=
github.com/bytedance/sonic/loader v0.3.0/go.mod h1:N8A3vUdtUebEY2/VQC0MyhYeKUFosQU6FxH2JmUe6VI=
+github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
+github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudwego/base64x v0.1.6 h1:t11wG9AECkCDk5fMSoxmufanudBtJ+/HemLstXDLI2M=
github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gEHfghB2IPU=
+github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
+github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
+github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
+github.com/docker/docker v25.0.6+incompatible h1:5cPwbwriIcsua2REJe8HqQV+6WlWc1byg2QSXzBxBGg=
+github.com/docker/docker v25.0.6+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
+github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
+github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
github.com/gin-gonic/gin v1.11.0 h1:OW/6PLjyusp2PPXtyxKHU0RbX6I/l28FTdDlae5ueWk=
github.com/gin-gonic/gin v1.11.0/go.mod h1:+iq/FyxlGzII0KHiBGjuNn4UNENUlKbGlNmc+W50Dls=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -29,6 +56,8 @@ github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw=
github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
@@ -36,10 +65,14 @@ github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
@@ -50,18 +83,30 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
+github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
+github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 h1:ZqeYNhU3OHLH3mGKHDcjJRFFRrJa6eAM5H+CtDdOsPc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=
+github.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
+github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
github.com/quic-go/qpack v0.5.1/go.mod h1:+PC4XFrEskIVkcLzpEkbLqq1uCoxPhQuvK5rH1ZgaEg=
github.com/quic-go/quic-go v0.54.0 h1:6s1YB9QotYI6Ospeiguknbp2Znb/jZYjZLRXn9kMQBg=
github.com/quic-go/quic-go v0.54.0/go.mod h1:e68ZEaCdyviluZmy44P6Iey98v/Wfz6HCjQEm+l8zTY=
+github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
+github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -75,28 +120,116 @@ github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.3.0 h1:Qd2W2sQawAfG8XSvzwhBeoGq71zXOC/Q1E9y/wUcsUA=
github.com/ugorji/go/codec v1.3.0/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ=
+go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
+go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 h1:f0cb2XPmrqn4XMy9PNliTgRKJgS5WcL/u0/WRYGz4t0=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0/go.mod h1:vnakAaFckOMiMtOIhFI2MNH4FYrZzXCYxmb1LlhoGz8=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 h1:Ckwye2FpXkYgiHX7fyVrN1uA/UYd9ounqqTuSNAv0k4=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0/go.mod h1:teIFJh5pW2y+AN7riv6IBPX2DuesS3HgP39mwOspKwU=
+go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
+go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
+go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
+go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
+go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
+go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
+go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
+go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
+go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
+go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU=
go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM=
golang.org/x/arch v0.20.0 h1:dx1zTU0MAE98U+TQ8BLl7XsJbgze2WnNKF/8tGp/Q6c=
golang.org/x/arch v0.20.0/go.mod h1:bdwinDaKcfZUGpH09BB7ZmOfhalA8lQdzl62l8gGWsk=
-golang.org/x/crypto v0.40.0 h1:r4x+VvoG5Fm+eJcxMaY8CQM7Lb0l1lsmjGBQ6s8BfKM=
-golang.org/x/crypto v0.40.0/go.mod h1:Qr1vMER5WyS2dfPHAlsOj01wgLbsyWtFn/aY+5+ZdxY=
-golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
-golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
-golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
-golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
-golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
-golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
+golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
+golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
+golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
-golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
-golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
-golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
-golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
-golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
-google.golang.org/protobuf v1.36.9 h1:w2gp2mA27hUeUzj9Ex9FBjsBm40zfaDtEWow293U7Iw=
-google.golang.org/protobuf v1.36.9/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
+golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
+golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
+golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
+golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
+golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
+golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
+golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
+google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
+google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
+google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
+google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
+google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
+gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
diff --git a/aggregator-server/internal/api/handlers/agent_build.go b/aggregator-server/internal/api/handlers/agent_build.go
index bd01f23..15c6c68 100644
--- a/aggregator-server/internal/api/handlers/agent_build.go
+++ b/aggregator-server/internal/api/handlers/agent_build.go
@@ -5,21 +5,34 @@ import (
"os"
"path/filepath"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/gin-gonic/gin"
)
+// AgentBuildHandler handles agent build operations
+type AgentBuildHandler struct {
+ agentQueries *queries.AgentQueries
+}
+
+// NewAgentBuildHandler creates a new agent build handler
+func NewAgentBuildHandler(agentQueries *queries.AgentQueries) *AgentBuildHandler {
+ return &AgentBuildHandler{
+ agentQueries: agentQueries,
+ }
+}
+
// BuildAgent handles the agent build endpoint
// Deprecated: Use AgentHandler.Rebuild instead
-func BuildAgent(c *gin.Context) {
+func (h *AgentBuildHandler) BuildAgent(c *gin.Context) {
var req services.AgentSetupRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
- // Create config builder
- configBuilder := services.NewConfigBuilder(req.ServerURL)
+ // Create config builder with database access
+ configBuilder := services.NewConfigBuilder(req.ServerURL, h.agentQueries.DB)
// Build agent configuration
config, err := configBuilder.BuildAgentConfig(req)
@@ -62,7 +75,7 @@ func BuildAgent(c *gin.Context) {
}
// GetBuildInstructions returns build instructions for manual setup
-func GetBuildInstructions(c *gin.Context) {
+func (h *AgentBuildHandler) GetBuildInstructions(c *gin.Context) {
agentID := c.Param("agentID")
if agentID == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "agent ID is required"})
@@ -70,7 +83,7 @@ func GetBuildInstructions(c *gin.Context) {
}
instructions := gin.H{
- "title": "RedFlag Agent Build Instructions",
+ "title": "RedFlag Agent Build Instructions",
"agent_id": agentID,
"steps": []gin.H{
{
@@ -139,7 +152,7 @@ func GetBuildInstructions(c *gin.Context) {
}
// DownloadBuildArtifacts provides download links for generated files
-func DownloadBuildArtifacts(c *gin.Context) {
+func (h *AgentBuildHandler) DownloadBuildArtifacts(c *gin.Context) {
agentID := c.Param("agentID")
fileType := c.Param("fileType")
buildDir := c.Query("buildDir")
@@ -184,4 +197,4 @@ func DownloadBuildArtifacts(c *gin.Context) {
// Serve file for download
c.FileAttachment(filePath, filepath.Base(filePath))
-}
\ No newline at end of file
+}
diff --git a/aggregator-server/internal/api/handlers/agent_events.go b/aggregator-server/internal/api/handlers/agent_events.go
new file mode 100644
index 0000000..0e97d83
--- /dev/null
+++ b/aggregator-server/internal/api/handlers/agent_events.go
@@ -0,0 +1,54 @@
+package handlers
+
+import (
+ "log"
+ "net/http"
+ "strconv"
+
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/gin-gonic/gin"
+ "github.com/google/uuid"
+)
+
+type AgentEventsHandler struct {
+ agentQueries *queries.AgentQueries
+}
+
+func NewAgentEventsHandler(aq *queries.AgentQueries) *AgentEventsHandler {
+ return &AgentEventsHandler{agentQueries: aq}
+}
+
+// GetAgentEvents returns system events for an agent with optional filtering
+// GET /api/v1/agents/:id/events?severity=error,critical,warning&limit=50
+func (h *AgentEventsHandler) GetAgentEvents(c *gin.Context) {
+ agentIDStr := c.Param("id")
+ agentID, err := uuid.Parse(agentIDStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, gin.H{"error": "invalid agent ID"})
+ return
+ }
+
+ // Optional query parameters
+ severity := c.Query("severity") // comma-separated filter: error,critical,warning,info
+ limitStr := c.DefaultQuery("limit", "50")
+ limit, err := strconv.Atoi(limitStr)
+ if err != nil || limit < 1 {
+ limit = 50
+ }
+ if limit > 1000 {
+ limit = 1000 // Cap at 1000 to prevent excessive queries
+ }
+
+ // Get events using the agent queries
+ events, err := h.agentQueries.GetAgentEvents(agentID, severity, limit)
+ if err != nil {
+ log.Printf("ERROR: Failed to fetch agent events: %v", err)
+ c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to fetch events"})
+ return
+ }
+
+ c.JSON(http.StatusOK, gin.H{
+ "events": events,
+ "total": len(events),
+ })
+}
\ No newline at end of file
diff --git a/aggregator-server/internal/api/handlers/agent_setup.go b/aggregator-server/internal/api/handlers/agent_setup.go
index deb1576..8bc445b 100644
--- a/aggregator-server/internal/api/handlers/agent_setup.go
+++ b/aggregator-server/internal/api/handlers/agent_setup.go
@@ -3,21 +3,33 @@ package handlers
import (
"net/http"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/gin-gonic/gin"
)
+// AgentSetupHandler handles agent setup operations
+type AgentSetupHandler struct {
+ agentQueries *queries.AgentQueries
+}
+
+// NewAgentSetupHandler creates a new agent setup handler
+func NewAgentSetupHandler(agentQueries *queries.AgentQueries) *AgentSetupHandler {
+ return &AgentSetupHandler{
+ agentQueries: agentQueries,
+ }
+}
+
// SetupAgent handles the agent setup endpoint
-// Deprecated: Use AgentHandler.Setup instead
-func SetupAgent(c *gin.Context) {
+func (h *AgentSetupHandler) SetupAgent(c *gin.Context) {
var req services.AgentSetupRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
- // Create config builder
- configBuilder := services.NewConfigBuilder(req.ServerURL)
+ // Create config builder with database access
+ configBuilder := services.NewConfigBuilder(req.ServerURL, h.agentQueries.DB)
// Build agent configuration
config, err := configBuilder.BuildAgentConfig(req)
@@ -43,14 +55,14 @@ func SetupAgent(c *gin.Context) {
}
// GetTemplates returns available agent templates
-func GetTemplates(c *gin.Context) {
- configBuilder := services.NewConfigBuilder("")
+func (h *AgentSetupHandler) GetTemplates(c *gin.Context) {
+ configBuilder := services.NewConfigBuilder("", h.agentQueries.DB)
templates := configBuilder.GetTemplates()
c.JSON(http.StatusOK, gin.H{"templates": templates})
}
// ValidateConfiguration validates a configuration before deployment
-func ValidateConfiguration(c *gin.Context) {
+func (h *AgentSetupHandler) ValidateConfiguration(c *gin.Context) {
var config map[string]interface{}
if err := c.ShouldBindJSON(&config); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
@@ -63,7 +75,7 @@ func ValidateConfiguration(c *gin.Context) {
return
}
- configBuilder := services.NewConfigBuilder("")
+ configBuilder := services.NewConfigBuilder("", h.agentQueries.DB)
template, exists := configBuilder.GetTemplate(agentType)
if !exists {
c.JSON(http.StatusBadRequest, gin.H{"error": "Unknown agent type"})
@@ -77,4 +89,4 @@ func ValidateConfiguration(c *gin.Context) {
"agent_type": agentType,
"template": template.Name,
})
-}
\ No newline at end of file
+}
diff --git a/aggregator-server/internal/api/handlers/agent_updates.go b/aggregator-server/internal/api/handlers/agent_updates.go
index be2b70c..2ffc06c 100644
--- a/aggregator-server/internal/api/handlers/agent_updates.go
+++ b/aggregator-server/internal/api/handlers/agent_updates.go
@@ -231,7 +231,7 @@ func (h *AgentUpdateHandler) UpdateAgent(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
// Rollback the updating status
h.agentQueries.UpdateAgentUpdatingStatus(req.AgentID, false, nil)
log.Printf("Failed to create update command for agent %s: %v", req.AgentID, err)
@@ -239,7 +239,28 @@ func (h *AgentUpdateHandler) UpdateAgent(c *gin.Context) {
return
}
-	log.Printf("✅ Agent update initiated for %s: %s (%s)", agent.Hostname, req.Version, req.Platform)
+ // Log agent update initiation to system_events table
+ event := &models.SystemEvent{
+ ID: uuid.New(),
+ AgentID: &agentIDUUID,
+ EventType: "agent_update",
+ EventSubtype: "initiated",
+ Severity: "info",
+ Component: "agent",
+ Message: fmt.Sprintf("Agent update initiated: %s -> %s (%s)", agent.CurrentVersion, req.Version, req.Platform),
+ Metadata: map[string]interface{}{
+ "old_version": agent.CurrentVersion,
+ "new_version": req.Version,
+ "platform": req.Platform,
+ "source": "web_ui",
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := h.agentQueries.CreateSystemEvent(event); err != nil {
+ log.Printf("Warning: Failed to log agent update to system_events: %v", err)
+ }
+
+ log.Printf("[UPDATE] Agent update initiated for %s: %s -> %s (%s)", agent.Hostname, agent.CurrentVersion, req.Version, req.Platform)
response := models.AgentUpdateResponse{
Message: "Update initiated successfully",
@@ -345,7 +366,7 @@ func (h *AgentUpdateHandler) BulkUpdateAgents(c *gin.Context) {
command.Params["scheduled_at"] = *req.Scheduled
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
// Rollback status
h.agentQueries.UpdateAgentUpdatingStatus(agentID, false, nil)
errors = append(errors, fmt.Sprintf("Agent %s: failed to create command", agentID))
@@ -359,6 +380,27 @@ func (h *AgentUpdateHandler) BulkUpdateAgents(c *gin.Context) {
"status": "initiated",
})
+ // Log each bulk update initiation to system_events table
+ event := &models.SystemEvent{
+ ID: uuid.New(),
+ AgentID: &agentID,
+ EventType: "agent_update",
+ EventSubtype: "initiated",
+ Severity: "info",
+ Component: "agent",
+ Message: fmt.Sprintf("Agent update initiated (bulk): %s -> %s (%s)", agent.CurrentVersion, req.Version, req.Platform),
+ Metadata: map[string]interface{}{
+ "old_version": agent.CurrentVersion,
+ "new_version": req.Version,
+ "platform": req.Platform,
+ "source": "web_ui_bulk",
+ },
+ CreatedAt: time.Now(),
+ }
+ if err := h.agentQueries.CreateSystemEvent(event); err != nil {
+ log.Printf("Warning: Failed to log bulk agent update to system_events: %v", err)
+ }
+
+	log.Printf("✅ Bulk update initiated for %s: %s (%s)", agent.Hostname, req.Version, req.Platform)
}
diff --git a/aggregator-server/internal/api/handlers/agents.go b/aggregator-server/internal/api/handlers/agents.go
index 2d10066..c250493 100644
--- a/aggregator-server/internal/api/handlers/agents.go
+++ b/aggregator-server/internal/api/handlers/agents.go
@@ -8,7 +8,10 @@ import (
"github.com/Fimeg/RedFlag/aggregator-server/internal/api/middleware"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/logging"
"github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/scheduler"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/Fimeg/RedFlag/aggregator-server/internal/utils"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
@@ -20,22 +23,59 @@ type AgentHandler struct {
refreshTokenQueries *queries.RefreshTokenQueries
registrationTokenQueries *queries.RegistrationTokenQueries
subsystemQueries *queries.SubsystemQueries
+ scheduler *scheduler.Scheduler
+ signingService *services.SigningService
+ securityLogger *logging.SecurityLogger
checkInInterval int
latestAgentVersion string
}
-func NewAgentHandler(aq *queries.AgentQueries, cq *queries.CommandQueries, rtq *queries.RefreshTokenQueries, regTokenQueries *queries.RegistrationTokenQueries, sq *queries.SubsystemQueries, checkInInterval int, latestAgentVersion string) *AgentHandler {
+func NewAgentHandler(aq *queries.AgentQueries, cq *queries.CommandQueries, rtq *queries.RefreshTokenQueries, regTokenQueries *queries.RegistrationTokenQueries, sq *queries.SubsystemQueries, scheduler *scheduler.Scheduler, signingService *services.SigningService, securityLogger *logging.SecurityLogger, checkInInterval int, latestAgentVersion string) *AgentHandler {
return &AgentHandler{
agentQueries: aq,
commandQueries: cq,
refreshTokenQueries: rtq,
registrationTokenQueries: regTokenQueries,
subsystemQueries: sq,
+ scheduler: scheduler,
+ signingService: signingService,
+ securityLogger: securityLogger,
checkInInterval: checkInInterval,
latestAgentVersion: latestAgentVersion,
}
}
+// signAndCreateCommand signs a command if signing service is enabled, then stores it in the database
+func (h *AgentHandler) signAndCreateCommand(cmd *models.AgentCommand) error {
+ // Sign the command before storing
+ if h.signingService != nil && h.signingService.IsEnabled() {
+ signature, err := h.signingService.SignCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to sign command: %w", err)
+ }
+ cmd.Signature = signature
+
+ // Log successful signing
+ if h.securityLogger != nil {
+ h.securityLogger.LogCommandSigned(cmd)
+ }
+ } else {
+ // Log warning if signing disabled
+ log.Printf("[WARNING] Command signing disabled, storing unsigned command")
+ if h.securityLogger != nil {
+ h.securityLogger.LogPrivateKeyNotConfigured()
+ }
+ }
+
+ // Store in database
+ err := h.commandQueries.CreateCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to create command: %w", err)
+ }
+
+ return nil
+}
+
// RegisterAgent handles agent registration
func (h *AgentHandler) RegisterAgent(c *gin.Context) {
var req models.AgentRegistrationRequest
@@ -185,6 +225,47 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
log.Printf("DEBUG: Failed to parse metrics JSON: %v", err)
}
+ // Process buffered events from agent if present
+ if metrics.Metadata != nil {
+ if bufferedEvents, exists := metrics.Metadata["buffered_events"]; exists {
+ if events, ok := bufferedEvents.([]interface{}); ok && len(events) > 0 {
+ stored := 0
+ for _, e := range events {
+ if eventMap, ok := e.(map[string]interface{}); ok {
+ // Extract event fields with type safety
+ eventType := getStringFromMap(eventMap, "event_type")
+ eventSubtype := getStringFromMap(eventMap, "event_subtype")
+ severity := getStringFromMap(eventMap, "severity")
+ component := getStringFromMap(eventMap, "component")
+ message := getStringFromMap(eventMap, "message")
+
+					if eventType != "" && eventSubtype != "" && severity != "" {
+						// metadata may be missing or malformed; the comma-ok
+						// assertion avoids a panic on a bad payload
+						metadata, _ := eventMap["metadata"].(map[string]interface{})
+						event := &models.SystemEvent{
+							AgentID:      &agentID,
+							EventType:    eventType,
+							EventSubtype: eventSubtype,
+							Severity:     severity,
+							Component:    component,
+							Message:      message,
+							Metadata:     metadata,
+							CreatedAt:    time.Now(),
+						}
+
+ if err := h.agentQueries.CreateSystemEvent(event); err != nil {
+ log.Printf("Warning: Failed to store buffered event: %v", err)
+ } else {
+ stored++
+ }
+ }
+ }
+ }
+ if stored > 0 {
+ log.Printf("Stored %d buffered events from agent %s", stored, agentID)
+ }
+ }
+ }
+ }
+
// Debug logging to see what we received
log.Printf("DEBUG: Received metrics - Version: '%s', CPU: %.2f, Memory: %.2f",
metrics.Version, metrics.CPUPercent, metrics.MemoryPercent)
@@ -355,9 +436,10 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
commandItems := make([]models.CommandItem, 0, len(commands))
for _, cmd := range commands {
commandItems = append(commandItems, models.CommandItem{
- ID: cmd.ID.String(),
- Type: cmd.CommandType,
- Params: cmd.Params,
+ ID: cmd.ID.String(),
+ Type: cmd.CommandType,
+ Params: cmd.Params,
+ Signature: cmd.Signature,
})
// Mark as sent
@@ -438,7 +520,7 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
CompletedAt: &now,
}
- if err := h.commandQueries.CreateCommand(auditCmd); err != nil {
+ if err := h.signAndCreateCommand(auditCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create audit command for stale heartbeat: %v", err)
} else {
log.Printf("[Heartbeat] Created audit trail for stale heartbeat cleanup (agent %s)", agentID)
@@ -456,6 +538,19 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
// Process command acknowledgments from agent
var acknowledgedIDs []string
if len(metrics.PendingAcknowledgments) > 0 {
+ // Debug: Check what commands exist for this agent
+ agentCommands, err := h.commandQueries.GetCommandsByAgentID(agentID)
+ if err != nil {
+ log.Printf("DEBUG: Failed to get commands for agent %s: %v", agentID, err)
+ } else {
+ log.Printf("DEBUG: Agent %s has %d total commands in database", agentID, len(agentCommands))
+ for _, cmd := range agentCommands {
+ if cmd.Status == "completed" || cmd.Status == "failed" || cmd.Status == "timed_out" {
+ log.Printf("DEBUG: Completed command found - ID: %s, Status: %s, Type: %s", cmd.ID, cmd.Status, cmd.CommandType)
+ }
+ }
+ }
+
log.Printf("DEBUG: Processing %d pending acknowledgments for agent %s: %v", len(metrics.PendingAcknowledgments), agentID, metrics.PendingAcknowledgments)
// Verify which commands from agent's pending list have been recorded
verified, err := h.commandQueries.VerifyCommandsCompleted(metrics.PendingAcknowledgments)
@@ -470,6 +565,19 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
}
}
+ // Hybrid Heartbeat: Check for scheduled subsystem jobs during heartbeat mode
+ // This ensures that even in heartbeat mode, scheduled scans can be triggered
+ if h.scheduler != nil {
+ // Only check for scheduled jobs if agent is in heartbeat mode (rapid polling enabled)
+ isHeartbeatMode := rapidPolling != nil && rapidPolling.Enabled
+ if isHeartbeatMode {
+ if err := h.checkAndCreateScheduledCommands(agentID); err != nil {
+			// Log the error but don't fail the request - this is an enhancement, not core functionality
+ log.Printf("[Heartbeat] Failed to check scheduled commands for agent %s: %v", agentID, err)
+ }
+ }
+ }
+
response := models.CommandsResponse{
Commands: commandItems,
RapidPolling: rapidPolling,
@@ -479,6 +587,94 @@ func (h *AgentHandler) GetCommands(c *gin.Context) {
c.JSON(http.StatusOK, response)
}
+// checkAndCreateScheduledCommands checks if any subsystem jobs are due for the agent
+// and creates commands for them using the scheduler (following Option A approach)
+func (h *AgentHandler) checkAndCreateScheduledCommands(agentID uuid.UUID) error {
+ // Get current subsystems for this agent from database
+ subsystems, err := h.subsystemQueries.GetSubsystems(agentID)
+ if err != nil {
+ return fmt.Errorf("failed to get subsystems: %w", err)
+ }
+
+ // Check each enabled subsystem with auto_run=true
+ now := time.Now()
+ jobsCreated := 0
+
+ for _, subsystem := range subsystems {
+ if !subsystem.Enabled || !subsystem.AutoRun {
+ continue
+ }
+
+ // Check if this subsystem job is due
+ var isDue bool
+ if subsystem.NextRunAt == nil {
+ // No next run time set, it's due
+ isDue = true
+ } else {
+ // Check if next run time has passed
+ isDue = subsystem.NextRunAt.Before(now) || subsystem.NextRunAt.Equal(now)
+ }
+
+ if isDue {
+ // Create the command using scheduler logic (reusing existing safeguards)
+ if err := h.createSubsystemCommand(agentID, subsystem); err != nil {
+ log.Printf("[Heartbeat] Failed to create command for %s subsystem: %v", subsystem.Subsystem, err)
+ continue
+ }
+ jobsCreated++
+
+ // Update next run time in database ONLY after successful command creation
+ if err := h.updateNextRunTime(agentID, subsystem); err != nil {
+ log.Printf("[Heartbeat] Failed to update next run time for %s subsystem: %v", subsystem.Subsystem, err)
+ }
+ }
+ }
+
+ if jobsCreated > 0 {
+ log.Printf("[Heartbeat] Created %d scheduled commands for agent %s", jobsCreated, agentID)
+ }
+
+ return nil
+}
+
+// createSubsystemCommand creates a subsystem scan command using scheduler's logic
+func (h *AgentHandler) createSubsystemCommand(agentID uuid.UUID, subsystem models.AgentSubsystem) error {
+ // Check backpressure: skip if agent has too many pending commands
+ pendingCount, err := h.commandQueries.CountPendingCommandsForAgent(agentID)
+ if err != nil {
+ return fmt.Errorf("failed to check pending commands: %w", err)
+ }
+
+ // Backpressure threshold (same as scheduler)
+ const backpressureThreshold = 10
+ if pendingCount >= backpressureThreshold {
+ return fmt.Errorf("agent has %d pending commands (threshold: %d), skipping", pendingCount, backpressureThreshold)
+ }
+
+ // Create the command using same format as scheduler
+ cmd := &models.AgentCommand{
+ ID: uuid.New(),
+ AgentID: agentID,
+ CommandType: fmt.Sprintf("scan_%s", subsystem.Subsystem),
+ Params: models.JSONB{},
+ Status: models.CommandStatusPending,
+ Source: models.CommandSourceSystem,
+ CreatedAt: time.Now(),
+ }
+
+ if err := h.signAndCreateCommand(cmd); err != nil {
+ return fmt.Errorf("failed to create command: %w", err)
+ }
+
+ return nil
+}
+
+// updateNextRunTime updates the last_run_at and next_run_at for a subsystem after creating a command
+func (h *AgentHandler) updateNextRunTime(agentID uuid.UUID, subsystem models.AgentSubsystem) error {
+ // Use the existing UpdateLastRun method which handles next_run_at calculation
+ return h.subsystemQueries.UpdateLastRun(agentID, subsystem.Subsystem)
+}
+
// ListAgents returns all agents with last scan information
func (h *AgentHandler) ListAgents(c *gin.Context) {
status := c.Query("status")
@@ -546,7 +742,7 @@ func (h *AgentHandler) TriggerScan(c *gin.Context) {
Source: models.CommandSourceManual,
}
- if err := h.commandQueries.CreateCommand(cmd); err != nil {
+ if err := h.signAndCreateCommand(cmd); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create command"})
return
}
@@ -591,7 +787,7 @@ func (h *AgentHandler) TriggerHeartbeat(c *gin.Context) {
Source: models.CommandSourceManual,
}
- if err := h.commandQueries.CreateCommand(cmd); err != nil {
+ if err := h.signAndCreateCommand(cmd); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create heartbeat command"})
return
}
@@ -786,7 +982,7 @@ func (h *AgentHandler) TriggerUpdate(c *gin.Context) {
Source: models.CommandSourceManual,
}
- if err := h.commandQueries.CreateCommand(cmd); err != nil {
+ if err := h.signAndCreateCommand(cmd); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create update command"})
return
}
@@ -827,6 +1023,15 @@ func (h *AgentHandler) RenewToken(c *gin.Context) {
log.Printf("Warning: Failed to update last_seen for agent %s: %v", req.AgentID, err)
}
+ // Update agent version if provided (for upgrade tracking)
+ if req.AgentVersion != "" {
+ if err := h.agentQueries.UpdateAgentVersion(req.AgentID, req.AgentVersion); err != nil {
+ log.Printf("Warning: Failed to update agent version during token renewal for agent %s: %v", req.AgentID, err)
+ } else {
+ log.Printf("Agent %s version updated to %s during token renewal", req.AgentID, req.AgentVersion)
+ }
+ }
+
// Update refresh token expiration (sliding window - reset to 90 days from now)
// This ensures active agents never need to re-register
newExpiry := time.Now().Add(90 * 24 * time.Hour)
@@ -1123,7 +1328,7 @@ func (h *AgentHandler) TriggerReboot(c *gin.Context) {
}
// Save command to database
- if err := h.commandQueries.CreateCommand(cmd); err != nil {
+ if err := h.signAndCreateCommand(cmd); err != nil {
log.Printf("Failed to create reboot command: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create reboot command"})
return
@@ -1179,3 +1384,13 @@ func (h *AgentHandler) GetAgentConfig(c *gin.Context) {
"version": time.Now().Unix(), // Simple version timestamp
})
}
+
+// getStringFromMap safely extracts a string value from a map
+func getStringFromMap(m map[string]interface{}, key string) string {
+ if val, exists := m[key]; exists {
+ if str, ok := val.(string); ok {
+ return str
+ }
+ }
+ return ""
+}
diff --git a/aggregator-server/internal/api/handlers/auth.go b/aggregator-server/internal/api/handlers/auth.go
index 8c18d1e..beda85a 100644
--- a/aggregator-server/internal/api/handlers/auth.go
+++ b/aggregator-server/internal/api/handlers/auth.go
@@ -6,23 +6,21 @@ import (
"time"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
- "github.com/Fimeg/RedFlag/aggregator-server/internal/models"
"github.com/gin-gonic/gin"
"github.com/golang-jwt/jwt/v5"
- "github.com/google/uuid"
)
// AuthHandler handles authentication for the web dashboard
type AuthHandler struct {
- jwtSecret string
- userQueries *queries.UserQueries
+ jwtSecret string
+ adminQueries *queries.AdminQueries
}
// NewAuthHandler creates a new auth handler
-func NewAuthHandler(jwtSecret string, userQueries *queries.UserQueries) *AuthHandler {
+func NewAuthHandler(jwtSecret string, adminQueries *queries.AdminQueries) *AuthHandler {
return &AuthHandler{
- jwtSecret: jwtSecret,
- userQueries: userQueries,
+ jwtSecret: jwtSecret,
+ adminQueries: adminQueries,
}
}
@@ -34,15 +32,15 @@ type LoginRequest struct {
// LoginResponse represents a login response
type LoginResponse struct {
- Token string `json:"token"`
- User *models.User `json:"user"`
+ Token string `json:"token"`
+ User *queries.Admin `json:"user"`
}
// UserClaims represents JWT claims for web dashboard users
type UserClaims struct {
- UserID uuid.UUID `json:"user_id"`
- Username string `json:"username"`
- Role string `json:"role"`
+ UserID string `json:"user_id"`
+ Username string `json:"username"`
+ Role string `json:"role"`
jwt.RegisteredClaims
}
@@ -54,8 +52,8 @@ func (h *AuthHandler) Login(c *gin.Context) {
return
}
- // Validate credentials against database
- user, err := h.userQueries.VerifyCredentials(req.Username, req.Password)
+ // Validate credentials against database hash
+ admin, err := h.adminQueries.VerifyAdminCredentials(req.Username, req.Password)
if err != nil {
c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid username or password"})
return
@@ -63,9 +61,9 @@ func (h *AuthHandler) Login(c *gin.Context) {
// Create JWT token for web dashboard
claims := UserClaims{
- UserID: user.ID,
- Username: user.Username,
- Role: user.Role,
+ UserID: fmt.Sprintf("%d", admin.ID),
+ Username: admin.Username,
+ Role: "admin", // Always admin for single-admin system
RegisteredClaims: jwt.RegisteredClaims{
ExpiresAt: jwt.NewNumericDate(time.Now().Add(24 * time.Hour)),
IssuedAt: jwt.NewNumericDate(time.Now()),
@@ -81,7 +79,7 @@ func (h *AuthHandler) Login(c *gin.Context) {
c.JSON(http.StatusOK, LoginResponse{
Token: tokenString,
- User: user,
+ User: admin,
})
}
diff --git a/aggregator-server/internal/api/handlers/build_orchestrator.go b/aggregator-server/internal/api/handlers/build_orchestrator.go
index a135cb2..dc90c1b 100644
--- a/aggregator-server/internal/api/handlers/build_orchestrator.go
+++ b/aggregator-server/internal/api/handlers/build_orchestrator.go
@@ -34,7 +34,7 @@ func NewAgentBuild(c *gin.Context) {
}
// Create config builder
- configBuilder := services.NewConfigBuilder(req.ServerURL)
+ configBuilder := services.NewConfigBuilder(req.ServerURL, nil)
// Build agent configuration
config, err := configBuilder.BuildAgentConfig(setupReq)
@@ -122,7 +122,7 @@ func UpgradeAgentBuild(c *gin.Context) {
}
// Create config builder
- configBuilder := services.NewConfigBuilder(req.ServerURL)
+ configBuilder := services.NewConfigBuilder(req.ServerURL, nil)
// Build agent configuration
config, err := configBuilder.BuildAgentConfig(setupReq)
diff --git a/aggregator-server/internal/api/handlers/docker.go b/aggregator-server/internal/api/handlers/docker.go
index abdbebe..c172b9a 100644
--- a/aggregator-server/internal/api/handlers/docker.go
+++ b/aggregator-server/internal/api/handlers/docker.go
@@ -1,11 +1,15 @@
package handlers
import (
+ "fmt"
+ "log"
"net/http"
"strconv"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/services"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/logging"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
@@ -14,16 +18,51 @@ type DockerHandler struct {
updateQueries *queries.UpdateQueries
agentQueries *queries.AgentQueries
commandQueries *queries.CommandQueries
+ signingService *services.SigningService
+ securityLogger *logging.SecurityLogger
}
-func NewDockerHandler(uq *queries.UpdateQueries, aq *queries.AgentQueries, cq *queries.CommandQueries) *DockerHandler {
+func NewDockerHandler(uq *queries.UpdateQueries, aq *queries.AgentQueries, cq *queries.CommandQueries, signingService *services.SigningService, securityLogger *logging.SecurityLogger) *DockerHandler {
return &DockerHandler{
updateQueries: uq,
agentQueries: aq,
commandQueries: cq,
+ signingService: signingService,
+ securityLogger: securityLogger,
}
}
+// signAndCreateCommand signs a command if signing service is enabled, then stores it in the database
+func (h *DockerHandler) signAndCreateCommand(cmd *models.AgentCommand) error {
+ // Sign the command before storing
+ if h.signingService != nil && h.signingService.IsEnabled() {
+ signature, err := h.signingService.SignCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to sign command: %w", err)
+ }
+ cmd.Signature = signature
+
+ // Log successful signing
+ if h.securityLogger != nil {
+ h.securityLogger.LogCommandSigned(cmd)
+ }
+ } else {
+ // Log warning if signing disabled
+ log.Printf("[WARNING] Command signing disabled, storing unsigned command")
+ if h.securityLogger != nil {
+ h.securityLogger.LogPrivateKeyNotConfigured()
+ }
+ }
+
+ // Store in database
+ err := h.commandQueries.CreateCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to create command: %w", err)
+ }
+
+ return nil
+}
+
// GetContainers returns Docker containers and images across all agents
func (h *DockerHandler) GetContainers(c *gin.Context) {
// Parse query parameters
@@ -430,7 +469,7 @@ func (h *DockerHandler) InstallUpdate(c *gin.Context) {
Source: models.CommandSourceManual, // User-initiated Docker update
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create Docker update command"})
return
}
diff --git a/aggregator-server/internal/api/handlers/downloads.go b/aggregator-server/internal/api/handlers/downloads.go
index 7f1cfcb..267fd01 100644
--- a/aggregator-server/internal/api/handlers/downloads.go
+++ b/aggregator-server/internal/api/handlers/downloads.go
@@ -9,6 +9,7 @@ import (
"strings"
"github.com/Fimeg/RedFlag/aggregator-server/internal/config"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/google/uuid"
"github.com/gin-gonic/gin"
@@ -19,13 +20,15 @@ type DownloadHandler struct {
agentDir string
config *config.Config
installTemplateService *services.InstallTemplateService
+ packageQueries *queries.PackageQueries
}
-func NewDownloadHandler(agentDir string, cfg *config.Config) *DownloadHandler {
+func NewDownloadHandler(agentDir string, cfg *config.Config, packageQueries *queries.PackageQueries) *DownloadHandler {
return &DownloadHandler{
agentDir: agentDir,
config: cfg,
installTemplateService: services.NewInstallTemplateService(),
+ packageQueries: packageQueries,
}
}
@@ -137,13 +140,58 @@ func (h *DownloadHandler) DownloadUpdatePackage(c *gin.Context) {
return
}
- // TODO: Implement actual package serving from database/filesystem
- // For now, return a placeholder response
- c.JSON(http.StatusNotImplemented, gin.H{
- "error": "Update package download not yet implemented",
- "package_id": packageID,
- "message": "This will serve the signed update package file",
- })
+ parsedPackageID, err := uuid.Parse(packageID)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid package ID format"})
+ return
+ }
+
+ // Fetch package from database
+ pkg, err := h.packageQueries.GetSignedPackageByID(parsedPackageID)
+ if err != nil {
+ if err.Error() == "update package not found" {
+ c.JSON(http.StatusNotFound, gin.H{
+ "error": "Package not found",
+ "package_id": packageID,
+ })
+ return
+ }
+
+ log.Printf("[ERROR] Failed to fetch package %s: %v", packageID, err)
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "Failed to retrieve package",
+ "package_id": packageID,
+ })
+ return
+ }
+
+ // Verify file exists on disk
+ if _, err := os.Stat(pkg.BinaryPath); os.IsNotExist(err) {
+ log.Printf("[ERROR] Package file not found on disk: %s", pkg.BinaryPath)
+ c.JSON(http.StatusNotFound, gin.H{
+ "error": "Package file not found on disk",
+ "package_id": packageID,
+ })
+ return
+ }
+
+ // Set appropriate headers
+ c.Header("Content-Type", "application/octet-stream")
+ c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filepath.Base(pkg.BinaryPath)))
+ c.Header("X-Package-Version", pkg.Version)
+ c.Header("X-Package-Platform", pkg.Platform)
+ c.Header("X-Package-Architecture", pkg.Architecture)
+
+ if pkg.Signature != "" {
+ c.Header("X-Package-Signature", pkg.Signature)
+ }
+
+ if pkg.Checksum != "" {
+ c.Header("X-Package-Checksum", pkg.Checksum)
+ }
+
+ // Serve the file
+ c.File(pkg.BinaryPath)
}
// InstallScript serves the installation script
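The download handler above exposes `X-Package-Checksum` alongside the file. A minimal sketch of what a client could do with it, assuming the checksum is a hex-encoded SHA-256 of the body (the diff only shows the header being set, not its encoding):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksumHex hashes a downloaded package body the same way the server-side
// CalculateChecksum helper does: SHA-256, hex-encoded.
func checksumHex(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

// verifyDownload compares the body hash against the X-Package-Checksum header,
// rejecting responses that omit the header entirely.
func verifyDownload(body []byte, headerChecksum string) bool {
	return headerChecksum != "" && checksumHex(body) == headerChecksum
}

func main() {
	body := []byte("hello")
	fmt.Println(verifyDownload(body, checksumHex(body)))
}
```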
diff --git a/aggregator-server/internal/api/handlers/registration_tokens.go b/aggregator-server/internal/api/handlers/registration_tokens.go
index 40bf505..f8bbd8f 100644
--- a/aggregator-server/internal/api/handlers/registration_tokens.go
+++ b/aggregator-server/internal/api/handlers/registration_tokens.go
@@ -1,6 +1,7 @@
package handlers
import (
+ "fmt"
"net/http"
"strconv"
"time"
@@ -48,8 +49,8 @@ func (h *RegistrationTokenHandler) GenerateRegistrationToken(c *gin.Context) {
if activeAgents >= h.config.AgentRegistration.MaxSeats {
c.JSON(http.StatusForbidden, gin.H{
- "error": "Maximum agent seats reached",
- "limit": h.config.AgentRegistration.MaxSeats,
+ "error": "Maximum agent seats reached",
+ "limit": h.config.AgentRegistration.MaxSeats,
"current": activeAgents,
})
return
@@ -106,14 +107,19 @@ func (h *RegistrationTokenHandler) GenerateRegistrationToken(c *gin.Context) {
if serverURL == "" {
serverURL = "localhost:8080" // Fallback for development
}
- installCommand := "curl -sfL https://" + serverURL + "/install | bash -s -- " + token
+ // Use http:// for localhost, correct API endpoint, and query parameter for token
+ protocol := "http://"
+ if serverURL != "localhost:8080" {
+ protocol = "https://"
+ }
+ installCommand := fmt.Sprintf("curl -sfL \"%s%s/api/v1/install/linux?token=%s\" | sudo bash", protocol, serverURL, token)
response := gin.H{
- "token": token,
- "label": request.Label,
- "expires_at": expiresAt,
+ "token": token,
+ "label": request.Label,
+ "expires_at": expiresAt,
"install_command": installCommand,
- "metadata": metadata,
+ "metadata": metadata,
}
c.JSON(http.StatusCreated, response)
@@ -178,8 +184,8 @@ func (h *RegistrationTokenHandler) ListRegistrationTokens(c *gin.Context) {
response := gin.H{
"tokens": tokens,
"pagination": gin.H{
- "page": page,
- "limit": limit,
+ "page": page,
+ "limit": limit,
"offset": offset,
},
"stats": stats,
@@ -324,14 +330,14 @@ func (h *RegistrationTokenHandler) GetTokenStats(c *gin.Context) {
"agent_usage": gin.H{
"active_agents": activeAgentCount,
"max_seats": h.config.AgentRegistration.MaxSeats,
- "available": h.config.AgentRegistration.MaxSeats - activeAgentCount,
+ "available": h.config.AgentRegistration.MaxSeats - activeAgentCount,
},
"security_limits": gin.H{
"max_tokens_per_request": h.config.AgentRegistration.MaxTokens,
- "max_token_duration": "7 days",
- "token_expiry_default": h.config.AgentRegistration.TokenExpiry,
+ "max_token_duration": "7 days",
+ "token_expiry_default": h.config.AgentRegistration.TokenExpiry,
},
}
c.JSON(http.StatusOK, response)
-}
\ No newline at end of file
+}
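The handler above interpolates the raw token into the curl one-liner. A defensive variant (an assumption, not what the diff does) would escape the token with `url.QueryEscape` before embedding it in the query string:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildInstallCommand reproduces the install one-liner from the handler,
// but query-escapes the token. For the URL-safe tokens RedFlag generates this
// is a no-op; it guards against future token formats containing reserved
// characters.
func buildInstallCommand(protocol, serverURL, token string) string {
	return fmt.Sprintf("curl -sfL \"%s%s/api/v1/install/linux?token=%s\" | sudo bash",
		protocol, serverURL, url.QueryEscape(token))
}

func main() {
	fmt.Println(buildInstallCommand("https://", "redflag.example.com", "abc123"))
}
```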
diff --git a/aggregator-server/internal/api/handlers/scanner_config.go b/aggregator-server/internal/api/handlers/scanner_config.go
new file mode 100644
index 0000000..a02e17e
--- /dev/null
+++ b/aggregator-server/internal/api/handlers/scanner_config.go
@@ -0,0 +1,146 @@
+package handlers
+
+import (
+ "log"
+ "net/http"
+ "time"
+
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/gin-gonic/gin"
+ "github.com/google/uuid"
+ "github.com/jmoiron/sqlx"
+)
+
+// ScannerConfigHandler manages scanner timeout configuration
+type ScannerConfigHandler struct {
+ queries *queries.ScannerConfigQueries
+}
+
+// NewScannerConfigHandler creates a new scanner config handler
+func NewScannerConfigHandler(db *sqlx.DB) *ScannerConfigHandler {
+ return &ScannerConfigHandler{
+ queries: queries.NewScannerConfigQueries(db),
+ }
+}
+
+// GetScannerTimeouts returns current scanner timeout configuration
+// GET /api/v1/admin/scanner-timeouts
+// Security: Requires admin authentication (WebAuthMiddleware)
+func (h *ScannerConfigHandler) GetScannerTimeouts(c *gin.Context) {
+ configs, err := h.queries.GetAllScannerConfigs()
+ if err != nil {
+ log.Printf("[ERROR] Failed to fetch scanner configs: %v", err)
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "failed to fetch scanner configuration",
+ })
+ return
+ }
+
+ c.JSON(http.StatusOK, gin.H{
+ "scanner_timeouts": configs,
+ "default_timeout_ms": 1800000, // 30 minutes default
+ })
+}
+
+// UpdateScannerTimeout updates scanner timeout configuration
+// PUT /api/v1/admin/scanner-timeouts/:scanner_name
+// Security: Requires admin authentication + audit logging
+func (h *ScannerConfigHandler) UpdateScannerTimeout(c *gin.Context) {
+ scannerName := c.Param("scanner_name")
+ if scannerName == "" {
+ c.JSON(http.StatusBadRequest, gin.H{
+ "error": "scanner_name is required",
+ })
+ return
+ }
+
+ var req struct {
+ TimeoutMs int `json:"timeout_ms" binding:"required,min=1000,max=7200000"` // 1s to 2 hours
+ }
+
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, gin.H{
+ "error": err.Error(),
+ })
+ return
+ }
+
+ timeout := time.Duration(req.TimeoutMs) * time.Millisecond
+
+ // Update config
+ if err := h.queries.UpsertScannerConfig(scannerName, timeout); err != nil {
+ log.Printf("[ERROR] Failed to update scanner config for %s: %v", scannerName, err)
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "failed to update scanner configuration",
+ })
+ return
+ }
+
+ // Create audit event in History table (ETHOS compliance)
+ userID := c.MustGet("user_id").(uuid.UUID)
+ /*
+ event := &models.SystemEvent{
+ ID: uuid.New(),
+ EventType: "scanner_config_change",
+ EventSubtype: "timeout_updated",
+ Severity: "info",
+ Component: "admin_api",
+ Message: fmt.Sprintf("Scanner timeout updated: %s = %v", scannerName, timeout),
+ Metadata: map[string]interface{}{
+ "scanner_name": scannerName,
+ "timeout_ms": req.TimeoutMs,
+ "user_id": userID.String(),
+ "source_ip": c.ClientIP(),
+ },
+ CreatedAt: time.Now(),
+ }
+ // TODO: Integrate with event logging system when available
+ */
+ log.Printf("[AUDIT] User %s updated scanner timeout: %s = %v", userID, scannerName, timeout)
+
+ c.JSON(http.StatusOK, gin.H{
+ "message": "scanner timeout updated successfully",
+ "scanner_name": scannerName,
+ "timeout_ms": req.TimeoutMs,
+ "timeout_human": timeout.String(),
+ })
+}
+
+// ResetScannerTimeout resets scanner timeout to default (30 minutes)
+// POST /api/v1/admin/scanner-timeouts/:scanner_name/reset
+// Security: Requires admin authentication + audit logging
+func (h *ScannerConfigHandler) ResetScannerTimeout(c *gin.Context) {
+ scannerName := c.Param("scanner_name")
+ if scannerName == "" {
+ c.JSON(http.StatusBadRequest, gin.H{
+ "error": "scanner_name is required",
+ })
+ return
+ }
+
+ defaultTimeout := 30 * time.Minute
+
+ if err := h.queries.UpsertScannerConfig(scannerName, defaultTimeout); err != nil {
+ log.Printf("[ERROR] Failed to reset scanner config for %s: %v", scannerName, err)
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "failed to reset scanner configuration",
+ })
+ return
+ }
+
+ // Audit log
+ userID := c.MustGet("user_id").(uuid.UUID)
+ log.Printf("[AUDIT] User %s reset scanner timeout: %s to default %v", userID, scannerName, defaultTimeout)
+
+ c.JSON(http.StatusOK, gin.H{
+ "message": "scanner timeout reset to default",
+ "scanner_name": scannerName,
+ "timeout_ms": int(defaultTimeout.Milliseconds()),
+ "timeout_human": defaultTimeout.String(),
+ })
+}
+
+// GetScannerConfigQueries provides access to the queries for config_builder.go
+func (h *ScannerConfigHandler) GetScannerConfigQueries() *queries.ScannerConfigQueries {
+ return h.queries
+}
diff --git a/aggregator-server/internal/api/handlers/setup.go b/aggregator-server/internal/api/handlers/setup.go
index 62f8214..33b1a0f 100644
--- a/aggregator-server/internal/api/handlers/setup.go
+++ b/aggregator-server/internal/api/handlers/setup.go
@@ -10,6 +10,7 @@ import (
"strconv"
"github.com/Fimeg/RedFlag/aggregator-server/internal/config"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/services"
"github.com/gin-gonic/gin"
"github.com/lib/pq"
_ "github.com/lib/pq"
@@ -64,16 +65,23 @@ func createSharedEnvContentForDisplay(req struct {
ServerHost string `json:"serverHost"`
ServerPort string `json:"serverPort"`
MaxSeats string `json:"maxSeats"`
-}, jwtSecret string) (string, error) {
+}, jwtSecret string, signingKeys map[string]string) (string, error) {
// Generate .env file content for user to copy
envContent := fmt.Sprintf(`# RedFlag Environment Configuration
-# Generated by web setup - Save this content to ./config/.env
+# Generated by web setup - Save this content to ./config/.env
+# [WARNING] SECURITY CRITICAL: Backup the signing key or you will lose access to all agents
# PostgreSQL Configuration (for PostgreSQL container)
POSTGRES_DB=%s
POSTGRES_USER=%s
POSTGRES_PASSWORD=%s
+# RedFlag Security - Ed25519 Signing Keys
+# These keys are used to cryptographically sign agent updates and commands
+# BACKUP THE PRIVATE KEY IMMEDIATELY - Store it in a secure location like a password manager
+REDFLAG_SIGNING_PRIVATE_KEY=%s
+REDFLAG_SIGNING_PUBLIC_KEY=%s
+
# RedFlag Server Configuration
REDFLAG_SERVER_HOST=%s
REDFLAG_SERVER_PORT=%s
@@ -87,8 +95,15 @@ REDFLAG_ADMIN_PASSWORD=%s
REDFLAG_JWT_SECRET=%s
REDFLAG_TOKEN_EXPIRY=24h
REDFLAG_MAX_TOKENS=100
-REDFLAG_MAX_SEATS=%s`,
+REDFLAG_MAX_SEATS=%s
+
+# Security Settings
+REDFLAG_SECURITY_COMMAND_SIGNING_ENFORCEMENT=strict
+REDFLAG_SECURITY_NONCE_TIMEOUT=600
+REDFLAG_SECURITY_LOG_LEVEL=warn
+`,
req.DBName, req.DBUser, req.DBPassword,
+ signingKeys["private_key"], signingKeys["public_key"],
req.ServerHost, req.ServerPort,
req.DBHost, req.DBPort, req.DBName, req.DBUser, req.DBPassword,
req.AdminUser, req.AdminPass, jwtSecret, req.MaxSeats)
@@ -136,7 +151,7 @@ func (h *SetupHandler) ShowSetupPage(c *gin.Context) {
@@ -199,7 +214,7 @@ func (h *SetupHandler) ShowSetupPage(c *gin.Context) {
- 🚀 Configure RedFlag Server
+ [START] Configure RedFlag Server
@@ -237,12 +252,12 @@ func (h *SetupHandler) ShowSetupPage(c *gin.Context) {
// Validate inputs
if (!formData.adminUser || !formData.adminPassword) {
- result.innerHTML = '❌ Admin username and password are required';
+ result.innerHTML = '[ERROR] Admin username and password are required';
return;
}
if (!formData.dbHost || !formData.dbPort || !formData.dbName || !formData.dbUser || !formData.dbPassword) {
- result.innerHTML = '❌ All database fields are required';
+ result.innerHTML = '[ERROR] All database fields are required';
return;
}
@@ -264,10 +279,10 @@ func (h *SetupHandler) ShowSetupPage(c *gin.Context) {
if (response.ok) {
let resultHtml = '';
- resultHtml += '✅ Configuration Generated Successfully! ';
+ resultHtml += '[SUCCESS] Configuration Generated Successfully! ';
resultHtml += 'Your JWT Secret: ' + resultData.jwtSecret + ' ';
resultHtml += '📋 Copy';
- resultHtml += '⚠️ Important Next Steps:';
+ resultHtml += '[WARNING] Important Next Steps:';
resultHtml += '';
resultHtml += '🔧 Complete Setup Required:';
resultHtml += '';
@@ -292,12 +307,12 @@ func (h *SetupHandler) ShowSetupPage(c *gin.Context) {
window.envContent = resultData.envContent;
} else {
- result.innerHTML = '❌ Error: ' + resultData.error + '';
+ result.innerHTML = '[ERROR] Error: ' + resultData.error + '';
submitBtn.disabled = false;
loading.style.display = 'none';
}
} catch (error) {
- result.innerHTML = '❌ Network error: ' + error.message + '';
+ result.innerHTML = '[ERROR] Network error: ' + error.message + '';
submitBtn.disabled = false;
loading.style.display = 'none';
}
@@ -383,6 +398,22 @@ func (h *SetupHandler) ConfigureServer(c *gin.Context) {
return
}
+ // SECURITY: Generate Ed25519 signing keypair (critical for v0.2.x)
+ fmt.Println("[START] Generating Ed25519 signing keypair for security...")
+ signingPublicKey, signingPrivateKey, err := ed25519.GenerateKey(rand.Reader)
+ if err != nil {
+ fmt.Printf("CRITICAL ERROR: Failed to generate signing keys: %v\n", err)
+ c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to generate signing keys. Security features cannot be enabled."})
+ return
+ }
+
+ signingKeys := map[string]string{
+ "public_key": hex.EncodeToString(signingPublicKey),
+ "private_key": hex.EncodeToString(signingPrivateKey),
+ }
+ fmt.Printf("[SUCCESS] Generated Ed25519 keypair - Fingerprint: %s\n", signingKeys["public_key"][:16])
+ fmt.Println("[WARNING] SECURITY WARNING: Backup the private key immediately or you will lose access to all agents!")
+
// Step 1: Update PostgreSQL password from bootstrap to user password
fmt.Println("Updating PostgreSQL password from bootstrap to user-provided password...")
bootstrapPassword := "redflag_bootstrap" // This matches our bootstrap .env
@@ -401,7 +432,7 @@ func (h *SetupHandler) ConfigureServer(c *gin.Context) {
fmt.Println("Generating configuration content for manual .env file update...")
// Generate the complete .env file content for the user to copy
- newEnvContent, err := createSharedEnvContentForDisplay(req, jwtSecret)
+ newEnvContent, err := createSharedEnvContentForDisplay(req, jwtSecret, signingKeys)
if err != nil {
fmt.Printf("Failed to generate .env content: %v\n", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to generate configuration content"})
@@ -415,6 +446,8 @@ func (h *SetupHandler) ConfigureServer(c *gin.Context) {
"manualRestartRequired": true,
"manualRestartCommand": "docker-compose down && docker-compose up -d",
"configFilePath": "./config/.env",
+ "securityNotice": "[WARNING] A signing key has been generated. BACKUP THE PRIVATE KEY or you will lose access to all agents!",
+ "publicKeyFingerprint": signingKeys["public_key"][:16] + "...",
})
}
@@ -458,3 +491,98 @@ func (h *SetupHandler) GenerateSigningKeys(c *gin.Context) {
})
}
+
+// ConfigureSecrets creates all Docker secrets automatically
+func (h *SetupHandler) ConfigureSecrets(c *gin.Context) {
+ // Check if Docker API is available
+ if !services.IsDockerAvailable() {
+ c.JSON(http.StatusServiceUnavailable, gin.H{
+ "error": "Docker API not available",
+ "message": "Docker socket is not mounted. Please ensure the server can access Docker daemon",
+ })
+ return
+ }
+
+ // Create Docker secrets service
+ dockerSecrets, err := services.NewDockerSecretsService()
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "Failed to connect to Docker",
+ "details": err.Error(),
+ })
+ return
+ }
+ defer dockerSecrets.Close()
+
+ // Generate all required secrets
+ type SecretConfig struct {
+ Name string
+ Value string
+ }
+
+ secrets := []SecretConfig{
+ {"redflag_admin_password", config.GenerateSecurePassword()},
+ {"redflag_jwt_secret", generateSecureJWTSecret()},
+ {"redflag_db_password", config.GenerateSecurePassword()},
+ }
+
+ // Try to create each secret
+ createdSecrets := []string{}
+ failedSecrets := []string{}
+
+ for _, secret := range secrets {
+ if err := dockerSecrets.CreateSecret(secret.Name, secret.Value); err != nil {
+ failedSecrets = append(failedSecrets, fmt.Sprintf("%s: %v", secret.Name, err))
+ } else {
+ createdSecrets = append(createdSecrets, secret.Name)
+ }
+ }
+
+ // Generate signing keys
+ publicKey, privateKey, err := ed25519.GenerateKey(rand.Reader)
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, gin.H{
+ "error": "Failed to generate signing keys",
+ "details": err.Error(),
+ })
+ return
+ }
+
+ publicKeyHex := hex.EncodeToString(publicKey)
+ privateKeyHex := hex.EncodeToString(privateKey)
+
+ // Create signing key secret
+ if err := dockerSecrets.CreateSecret("redflag_signing_private_key", privateKeyHex); err != nil {
+ failedSecrets = append(failedSecrets, fmt.Sprintf("redflag_signing_private_key: %v", err))
+ } else {
+ createdSecrets = append(createdSecrets, "redflag_signing_private_key")
+ }
+
+ response := gin.H{
+ "created_secrets": createdSecrets,
+ "public_key": publicKeyHex,
+ "fingerprint": publicKeyHex[:16],
+ }
+
+ if len(failedSecrets) > 0 {
+ response["failed_secrets"] = failedSecrets
+ c.JSON(http.StatusMultiStatus, response)
+ return
+ }
+
+ c.JSON(http.StatusOK, response)
+}
+
+// generateSecurePassword generates a secure password for admin/db
+func generateSecurePassword() string {
+ bytes := make([]byte, 16)
+ rand.Read(bytes)
+ return hex.EncodeToString(bytes)[:16] // 16 character random password
+}
+
+// generateSecureJWTSecret generates a secure JWT secret
+func generateSecureJWTSecret() string {
+ bytes := make([]byte, 32)
+ rand.Read(bytes)
+ return hex.EncodeToString(bytes)
+}
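The setup flow above generates an Ed25519 keypair, hex-encodes both halves, and uses the first 16 hex characters (8 bytes) of the public key as a short fingerprint. Factored into a standalone sketch:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateSigningKeys mirrors ConfigureServer: create an Ed25519 keypair,
// hex-encode both keys, and derive a short fingerprint from the public key.
func generateSigningKeys() (publicHex, privateHex, fingerprint string, err error) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return "", "", "", err
	}
	publicHex = hex.EncodeToString(pub)   // 32 bytes -> 64 hex chars
	privateHex = hex.EncodeToString(priv) // 64 bytes -> 128 hex chars
	return publicHex, privateHex, publicHex[:16], nil
}

func main() {
	pub, _, fp, err := generateSigningKeys()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pub), fp)
}
```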
diff --git a/aggregator-server/internal/api/handlers/subsystems.go b/aggregator-server/internal/api/handlers/subsystems.go
index c353d4e..344dd27 100644
--- a/aggregator-server/internal/api/handlers/subsystems.go
+++ b/aggregator-server/internal/api/handlers/subsystems.go
@@ -1,10 +1,14 @@
package handlers
import (
+ "fmt"
+ "log"
"net/http"
"github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/services"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/logging"
"github.com/gin-gonic/gin"
"github.com/google/uuid"
)
@@ -12,15 +16,50 @@ import (
type SubsystemHandler struct {
subsystemQueries *queries.SubsystemQueries
commandQueries *queries.CommandQueries
+ signingService *services.SigningService
+ securityLogger *logging.SecurityLogger
}
-func NewSubsystemHandler(sq *queries.SubsystemQueries, cq *queries.CommandQueries) *SubsystemHandler {
+func NewSubsystemHandler(sq *queries.SubsystemQueries, cq *queries.CommandQueries, signingService *services.SigningService, securityLogger *logging.SecurityLogger) *SubsystemHandler {
return &SubsystemHandler{
subsystemQueries: sq,
commandQueries: cq,
+ signingService: signingService,
+ securityLogger: securityLogger,
}
}
+// signAndCreateCommand signs a command if signing service is enabled, then stores it in the database
+func (h *SubsystemHandler) signAndCreateCommand(cmd *models.AgentCommand) error {
+ // Sign the command before storing
+ if h.signingService != nil && h.signingService.IsEnabled() {
+ signature, err := h.signingService.SignCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to sign command: %w", err)
+ }
+ cmd.Signature = signature
+
+ // Log successful signing
+ if h.securityLogger != nil {
+ h.securityLogger.LogCommandSigned(cmd)
+ }
+ } else {
+ // Log warning if signing disabled
+ log.Printf("[WARNING] Command signing disabled, storing unsigned command")
+ if h.securityLogger != nil {
+ h.securityLogger.LogPrivateKeyNotConfigured()
+ }
+ }
+
+ // Store in database
+ err := h.commandQueries.CreateCommand(cmd)
+ if err != nil {
+ return fmt.Errorf("failed to create command: %w", err)
+ }
+
+ return nil
+}
+
// GetSubsystems retrieves all subsystems for an agent
// GET /api/v1/agents/:id/subsystems
func (h *SubsystemHandler) GetSubsystems(c *gin.Context) {
@@ -205,7 +244,7 @@ func (h *SubsystemHandler) TriggerSubsystem(c *gin.Context) {
Source: "web_ui", // Manual trigger from UI
}
- err = h.commandQueries.CreateCommand(command)
+ err = h.signAndCreateCommand(command)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to create command"})
return
diff --git a/aggregator-server/internal/api/handlers/update_handler.go b/aggregator-server/internal/api/handlers/update_handler.go
index cbcabb0..43ced3d 100644
--- a/aggregator-server/internal/api/handlers/update_handler.go
+++ b/aggregator-server/internal/api/handlers/update_handler.go
@@ -281,6 +281,8 @@ func (h *UnifiedUpdateHandler) ReportLog(c *gin.Context) {
"duration_seconds": req.DurationSeconds,
"logged_at": time.Now(),
}
+ log.Printf("DEBUG: ReportLog - Marking command %s as completed for agent %s", commandID, agentID)
+
if req.Result == "success" || req.Result == "completed" {
if err := h.commandQueries.MarkCommandCompleted(commandID, result); err != nil {
@@ -446,12 +448,12 @@ func (h *UnifiedUpdateHandler) InstallUpdate(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", update.AgentID, err)
}
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create dry run command"})
return
}
@@ -518,12 +520,12 @@ func (h *UnifiedUpdateHandler) ReportDependencies(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", agentID, err)
}
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create installation command"})
return
}
@@ -592,12 +594,12 @@ func (h *UnifiedUpdateHandler) ConfirmDependencies(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", update.AgentID, err)
}
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create confirmation command"})
return
}
@@ -735,8 +737,32 @@ func (h *UnifiedUpdateHandler) RetryCommand(c *gin.Context) {
return
}
- newCommand, err := h.commandQueries.RetryCommand(id)
+ // Get the original command
+ original, err := h.commandQueries.GetCommandByID(id)
if err != nil {
+ c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("failed to get original command: %v", err)})
+ return
+ }
+
+ // Only allow retry of failed, timed_out, or cancelled commands
+ if original.Status != "failed" && original.Status != "timed_out" && original.Status != "cancelled" {
+ c.JSON(http.StatusBadRequest, gin.H{"error": "command must be failed, timed_out, or cancelled to retry"})
+ return
+ }
+
+ // Create new command with same parameters, linking it to the original
+ newCommand := &models.AgentCommand{
+ ID: uuid.New(),
+ AgentID: original.AgentID,
+ CommandType: original.CommandType,
+ Params: original.Params,
+ Status: models.CommandStatusPending,
+ CreatedAt: time.Now(),
+ RetriedFromID: &id,
+ }
+
+ // Sign and store the new command
+ if err := h.agentHandler.signAndCreateCommand(newCommand); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("failed to retry command: %v", err)})
return
}
diff --git a/aggregator-server/internal/api/handlers/updates.go b/aggregator-server/internal/api/handlers/updates.go
index 95d7214..8d0ad36 100644
--- a/aggregator-server/internal/api/handlers/updates.go
+++ b/aggregator-server/internal/api/handlers/updates.go
@@ -484,7 +484,7 @@ func (h *UpdateHandler) InstallUpdate(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", update.AgentID, err)
} else {
log.Printf("[Heartbeat] Command created for agent %s before dry run", update.AgentID)
@@ -494,7 +494,7 @@ func (h *UpdateHandler) InstallUpdate(c *gin.Context) {
}
// Store the dry run command in database
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create dry run command"})
return
}
@@ -591,7 +591,7 @@ func (h *UpdateHandler) ReportDependencies(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", agentID, err)
} else {
log.Printf("[Heartbeat] Command created for agent %s before installation", agentID)
@@ -600,7 +600,7 @@ func (h *UpdateHandler) ReportDependencies(c *gin.Context) {
log.Printf("[Heartbeat] Skipping heartbeat command for agent %s (already active)", agentID)
}
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create installation command"})
return
}
@@ -673,7 +673,7 @@ func (h *UpdateHandler) ConfirmDependencies(c *gin.Context) {
CreatedAt: time.Now(),
}
- if err := h.commandQueries.CreateCommand(heartbeatCmd); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(heartbeatCmd); err != nil {
log.Printf("[Heartbeat] Warning: Failed to create heartbeat command for agent %s: %v", update.AgentID, err)
} else {
log.Printf("[Heartbeat] Command created for agent %s before confirm dependencies", update.AgentID)
@@ -683,7 +683,7 @@ func (h *UpdateHandler) ConfirmDependencies(c *gin.Context) {
}
// Store the command in database
- if err := h.commandQueries.CreateCommand(command); err != nil {
+ if err := h.agentHandler.signAndCreateCommand(command); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create confirmation command"})
return
}
diff --git a/aggregator-server/internal/common/agentfile.go b/aggregator-server/internal/common/agentfile.go
new file mode 100644
index 0000000..9f970bd
--- /dev/null
+++ b/aggregator-server/internal/common/agentfile.go
@@ -0,0 +1,44 @@
+package common
+
+import (
+ "crypto/sha256"
+ "encoding/hex"
+ "os"
+ "time"
+)
+
+type AgentFile struct {
+ Path string `json:"path"`
+ Size int64 `json:"size"`
+ ModifiedTime time.Time `json:"modified_time"`
+ Version string `json:"version,omitempty"`
+ Checksum string `json:"checksum"`
+ Required bool `json:"required"`
+ Migrate bool `json:"migrate"`
+ Description string `json:"description"`
+}
+
+// CalculateChecksum computes SHA256 checksum of a file
+func CalculateChecksum(filePath string) (string, error) {
+ data, err := os.ReadFile(filePath)
+ if err != nil {
+ return "", err
+ }
+ hash := sha256.Sum256(data)
+ return hex.EncodeToString(hash[:]), nil
+}
+
+// IsRequiredFile determines if a file is required for agent operation
+func IsRequiredFile(path string) bool {
+ requiredFiles := []string{
+ "/etc/redflag/config.json",
+ "/usr/local/bin/redflag-agent",
+ "/etc/systemd/system/redflag-agent.service",
+ }
+ for _, rf := range requiredFiles {
+ if path == rf {
+ return true
+ }
+ }
+ return false
+}
diff --git a/aggregator-server/internal/config/config.go b/aggregator-server/internal/config/config.go
index 46c7a71..5814988 100644
--- a/aggregator-server/internal/config/config.go
+++ b/aggregator-server/internal/config/config.go
@@ -5,7 +5,9 @@ import (
"encoding/hex"
"fmt"
"os"
+ "path/filepath"
"strconv"
+ "strings"
)
// Config holds the application configuration
@@ -29,6 +31,7 @@ type Config struct {
}
Admin struct {
Username string `env:"REDFLAG_ADMIN_USER" default:"admin"`
+ Email string `env:"REDFLAG_ADMIN_EMAIL" default:"admin@example.com"`
Password string `env:"REDFLAG_ADMIN_PASSWORD"`
JWTSecret string `env:"REDFLAG_JWT_SECRET"`
}
@@ -44,16 +47,80 @@ type Config struct {
MinAgentVersion string `env:"MIN_AGENT_VERSION" default:"0.1.22"`
SigningPrivateKey string `env:"REDFLAG_SIGNING_PRIVATE_KEY"`
DebugEnabled bool `env:"REDFLAG_DEBUG" default:"false"` // Enable debug logging
+ SecurityLogging struct {
+ Enabled bool `env:"REDFLAG_SECURITY_LOG_ENABLED" default:"true"`
+ Level string `env:"REDFLAG_SECURITY_LOG_LEVEL" default:"warn"` // none, error, warn, info, debug
+ LogSuccesses bool `env:"REDFLAG_SECURITY_LOG_SUCCESSES" default:"false"`
+ FilePath string `env:"REDFLAG_SECURITY_LOG_PATH" default:"/var/log/redflag/security.json"`
+ MaxSizeMB int `env:"REDFLAG_SECURITY_LOG_MAX_SIZE" default:"100"`
+ MaxFiles int `env:"REDFLAG_SECURITY_LOG_MAX_FILES" default:"10"`
+ RetentionDays int `env:"REDFLAG_SECURITY_LOG_RETENTION" default:"90"`
+ LogToDatabase bool `env:"REDFLAG_SECURITY_LOG_TO_DB" default:"true"`
+ HashIPAddresses bool `env:"REDFLAG_SECURITY_LOG_HASH_IP" default:"true"`
+ }
}
-// Load reads configuration from environment variables only (immutable configuration)
-func Load() (*Config, error) {
- fmt.Printf("[CONFIG] Loading configuration from environment variables\n")
+// IsDockerSecretsMode returns true if the application is running in Docker secrets mode
+func IsDockerSecretsMode() bool {
+ // Check if we're running in Docker and secrets are available
+ if _, err := os.Stat("/run/secrets"); err == nil {
+ // Also check if any RedFlag secrets exist
+ if _, err := os.Stat("/run/secrets/redflag_admin_password"); err == nil {
+ return true
+ }
+ }
+ // Check environment variable override
+ return os.Getenv("REDFLAG_SECRETS_MODE") == "true"
+}
- cfg := &Config{}
+// getSecretPath returns the full path to a Docker secret file
+func getSecretPath(secretName string) string {
+ return filepath.Join("/run/secrets", secretName)
+}
+
+// loadFromSecrets reads configuration from Docker secrets
+func loadFromSecrets(cfg *Config) error {
+ // Note: For Docker secrets, we need to map environment variables differently
+ // Docker secrets appear as files that contain the secret value
+ fmt.Printf("[CONFIG] Loading configuration from Docker secrets\n")
+
+ // Load sensitive values from Docker secrets
+ if password, err := readSecretFile("redflag_admin_password"); err == nil && password != "" {
+ cfg.Admin.Password = password
+ fmt.Printf("[CONFIG] [OK] Admin password loaded from Docker secret\n")
+ }
+
+ if jwtSecret, err := readSecretFile("redflag_jwt_secret"); err == nil && jwtSecret != "" {
+ cfg.Admin.JWTSecret = jwtSecret
+ fmt.Printf("[CONFIG] [OK] JWT secret loaded from Docker secret\n")
+ }
+
+ if dbPassword, err := readSecretFile("redflag_db_password"); err == nil && dbPassword != "" {
+ cfg.Database.Password = dbPassword
+ fmt.Printf("[CONFIG] [OK] Database password loaded from Docker secret\n")
+ }
+
+ if signingKey, err := readSecretFile("redflag_signing_private_key"); err == nil && signingKey != "" {
+ cfg.SigningPrivateKey = signingKey
+ fmt.Printf("[CONFIG] [OK] Signing private key loaded from Docker secret (%d characters)\n", len(signingKey))
+ }
+
+ // For other configuration, fall back to environment variables
+ // This allows mixing secrets (for sensitive data) with env vars (for non-sensitive config)
+ return loadFromEnv(cfg, true)
+}
+
+// loadFromEnv reads configuration from environment variables
+// If skipSensitive=true, it won't override values that might have come from secrets
+func loadFromEnv(cfg *Config, skipSensitive bool) error {
+ if !skipSensitive {
+ fmt.Printf("[CONFIG] Loading configuration from environment variables\n")
+ }
// Parse server configuration
- cfg.Server.Host = getEnv("REDFLAG_SERVER_HOST", "0.0.0.0")
+ if !skipSensitive || cfg.Server.Host == "" {
+ cfg.Server.Host = getEnv("REDFLAG_SERVER_HOST", "0.0.0.0")
+ }
serverPort, _ := strconv.Atoi(getEnv("REDFLAG_SERVER_PORT", "8080"))
cfg.Server.Port = serverPort
cfg.Server.PublicURL = getEnv("REDFLAG_PUBLIC_URL", "") // Optional external URL
@@ -67,12 +134,18 @@ func Load() (*Config, error) {
cfg.Database.Port = dbPort
cfg.Database.Database = getEnv("REDFLAG_DB_NAME", "redflag")
cfg.Database.Username = getEnv("REDFLAG_DB_USER", "redflag")
- cfg.Database.Password = getEnv("REDFLAG_DB_PASSWORD", "")
+
+ // Only load password from env if we're not skipping sensitive data
+ if !skipSensitive {
+ cfg.Database.Password = getEnv("REDFLAG_DB_PASSWORD", "")
+ }
// Parse admin configuration
cfg.Admin.Username = getEnv("REDFLAG_ADMIN_USER", "admin")
- cfg.Admin.Password = getEnv("REDFLAG_ADMIN_PASSWORD", "")
- cfg.Admin.JWTSecret = getEnv("REDFLAG_JWT_SECRET", "")
+ if !skipSensitive {
+ cfg.Admin.Password = getEnv("REDFLAG_ADMIN_PASSWORD", "")
+ cfg.Admin.JWTSecret = getEnv("REDFLAG_JWT_SECRET", "")
+ }
// Parse agent registration configuration
cfg.AgentRegistration.TokenExpiry = getEnv("REDFLAG_TOKEN_EXPIRY", "24h")
@@ -87,40 +160,49 @@ func Load() (*Config, error) {
cfg.CheckInInterval = checkInInterval
cfg.OfflineThreshold = offlineThreshold
cfg.Timezone = getEnv("TIMEZONE", "UTC")
- cfg.LatestAgentVersion = getEnv("LATEST_AGENT_VERSION", "0.1.23.5")
+ cfg.LatestAgentVersion = getEnv("LATEST_AGENT_VERSION", "0.1.23.6")
cfg.MinAgentVersion = getEnv("MIN_AGENT_VERSION", "0.1.22")
- cfg.SigningPrivateKey = getEnv("REDFLAG_SIGNING_PRIVATE_KEY", "")
- // Debug: Log signing key status
- if cfg.SigningPrivateKey != "" {
- fmt.Printf("[CONFIG] ✅ Ed25519 signing private key configured (%d characters)\n", len(cfg.SigningPrivateKey))
- } else {
- fmt.Printf("[CONFIG] ❌ No Ed25519 signing private key found in REDFLAG_SIGNING_PRIVATE_KEY\n")
+ if !skipSensitive {
+ cfg.SigningPrivateKey = getEnv("REDFLAG_SIGNING_PRIVATE_KEY", "")
}
- // Handle missing secrets
- if cfg.Admin.Password == "" || cfg.Admin.JWTSecret == "" || cfg.Database.Password == "" {
- fmt.Printf("[WARNING] Missing required configuration (admin password, JWT secret, or database password)\n")
- fmt.Printf("[INFO] Run: ./redflag-server --setup to configure\n")
- return nil, fmt.Errorf("missing required configuration")
+ return nil
+}
+
+// readSecretFile reads a Docker secret from /run/secrets/ directory
+func readSecretFile(secretName string) (string, error) {
+ path := getSecretPath(secretName)
+ data, err := os.ReadFile(path)
+ if err != nil {
+ return "", fmt.Errorf("failed to read secret %s from %s: %w", secretName, path, err)
+ }
+ return strings.TrimSpace(string(data)), nil
+}
+
+// Load reads configuration from Docker secrets or environment variables
+func Load() (*Config, error) {
+ // Check if we're in Docker secrets mode
+ if IsDockerSecretsMode() {
+ fmt.Printf("[CONFIG] Detected Docker secrets mode\n")
+ cfg := &Config{}
+ if err := loadFromSecrets(cfg); err != nil {
+ return nil, fmt.Errorf("failed to load configuration from secrets: %w", err)
+ }
+ return cfg, nil
}
- // Check if we're using bootstrap defaults that need to be replaced
- if cfg.Admin.Password == "changeme" || cfg.Admin.JWTSecret == "bootstrap-jwt-secret-replace-in-setup" || cfg.Database.Password == "redflag_bootstrap" {
- fmt.Printf("[INFO] Server running with bootstrap configuration - setup required\n")
- fmt.Printf("[INFO] Configure via web interface at: http://localhost:8080/setup\n")
- return nil, fmt.Errorf("bootstrap configuration detected - setup required")
- }
-
- // Validate JWT secret is not the development default
- if cfg.Admin.JWTSecret == "test-secret-for-development-only" {
- fmt.Printf("[SECURITY WARNING] Using development JWT secret\n")
- fmt.Printf("[INFO] Run: ./redflag-server --setup to configure production secrets\n")
+ // Default to environment variable mode
+ cfg := &Config{}
+ if err := loadFromEnv(cfg, false); err != nil {
+ return nil, fmt.Errorf("failed to load configuration from environment: %w", err)
}
+ // Continue with the rest of the validation...
return cfg, nil
}
+
// RunSetupWizard is deprecated - configuration is now handled via web interface
func RunSetupWizard() error {
return fmt.Errorf("CLI setup wizard is deprecated. Please use the web interface at http://localhost:8080/setup for configuration")
@@ -133,7 +215,18 @@ func getEnv(key, defaultValue string) string {
return defaultValue
}
-
+// GenerateSecurePassword generates a secure 16-character alphanumeric password
+func GenerateSecurePassword() string {
+ bytes := make([]byte, 16)
+ if _, err := rand.Read(bytes); err != nil {
+ panic(fmt.Sprintf("crypto/rand unavailable: %v", err))
+ }
+ // Alphanumeric characters for better UX; the modulo mapping carries a
+ // slight bias, acceptable for a bootstrap password
+ chars := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
+ result := make([]byte, 16)
+ for i := range result {
+ result[i] = chars[int(bytes[i])%len(chars)]
+ }
+ return string(result)
+}
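Mapping a random byte onto a 62-character set with `%` slightly favors the first few characters (256 is not a multiple of 62). Where an unbiased password matters, rejection sampling fixes this by discarding bytes above the largest multiple of 62. A hypothetical variant, not the function shipped above:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// generatePassword draws alphanumeric characters via rejection sampling,
// avoiding the modulo bias of mapping a raw byte onto a 62-char set.
func generatePassword(length int) (string, error) {
	const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	// Largest multiple of len(chars) that fits in a byte: 62*4 = 248.
	const limit = byte(248)
	out := make([]byte, 0, length)
	buf := make([]byte, 1)
	for len(out) < length {
		if _, err := rand.Read(buf); err != nil {
			return "", fmt.Errorf("crypto/rand: %w", err)
		}
		if buf[0] < limit { // reject 248..255 so every char is equally likely
			out = append(out, chars[buf[0]%62])
		}
	}
	return string(out), nil
}
```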
// GenerateSecureToken generates a cryptographically secure random token
func GenerateSecureToken() (string, error) {
diff --git a/aggregator-server/internal/database/migrations/018_create_scanner_config_table.sql b/aggregator-server/internal/database/migrations/018_create_scanner_config_table.sql
new file mode 100644
index 0000000..5937cc0
--- /dev/null
+++ b/aggregator-server/internal/database/migrations/018_create_scanner_config_table.sql
@@ -0,0 +1,34 @@
+-- Migration 018: Create scanner_config table for user-configurable scanner timeouts
+-- This enables admin users to adjust scanner timeouts per subsystem via web UI
+
+CREATE TABLE IF NOT EXISTS scanner_config (
+ scanner_name VARCHAR(50) PRIMARY KEY,
+ timeout_ms BIGINT NOT NULL, -- Timeout in milliseconds
+ updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
+
+ CHECK (timeout_ms > 0 AND timeout_ms <= 7200000) -- Max 2 hours (7200000ms)
+);
+
+COMMENT ON TABLE scanner_config IS 'Stores user-configurable scanner timeout values';
+COMMENT ON COLUMN scanner_config.scanner_name IS 'Name of the scanner (dnf, apt, docker, etc.)';
+COMMENT ON COLUMN scanner_config.timeout_ms IS 'Timeout in milliseconds (1s = 1000ms)';
+COMMENT ON COLUMN scanner_config.updated_at IS 'When this configuration was last modified';
+
+-- Create index on updated_at for efficient querying of recently changed configs
+CREATE INDEX IF NOT EXISTS idx_scanner_config_updated_at ON scanner_config(updated_at);
+
+-- Insert default timeout values for all scanners
+-- 30 minutes (1800000ms) is the new default for package scanners
+INSERT INTO scanner_config (scanner_name, timeout_ms) VALUES
+ ('system', 10000), -- 10 seconds for system metrics
+ ('storage', 10000), -- 10 seconds for storage scan
+ ('apt', 1800000), -- 30 minutes for APT
+ ('dnf', 1800000), -- 30 minutes for DNF
+ ('docker', 60000), -- 60 seconds for Docker
+ ('windows', 600000), -- 10 minutes for Windows Updates
+ ('winget', 120000), -- 2 minutes for Winget
+ ('updates', 30000) -- 30 seconds for virtual update subsystem
+ON CONFLICT (scanner_name) DO NOTHING;
+
+-- Grant permissions
+GRANT SELECT, INSERT, UPDATE, DELETE ON scanner_config TO redflag_user;
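Server-side code that writes to `scanner_config` can pre-validate values against the same bounds the CHECK constraint enforces, so bad input fails with a clear message instead of a database error. A minimal sketch; `validateScannerTimeout` is a hypothetical helper, not part of the patch:

```go
package main

import "fmt"

// validateScannerTimeout mirrors the CHECK constraint in migration 018:
// timeout_ms must be positive and at most 2 hours (7,200,000 ms).
func validateScannerTimeout(ms int64) error {
	const maxMs = 7_200_000
	if ms <= 0 || ms > maxMs {
		return fmt.Errorf("timeout_ms must be in (0, %d], got %d", maxMs, ms)
	}
	return nil
}
```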
diff --git a/aggregator-server/internal/database/migrations/019_create_system_events_table.up.sql b/aggregator-server/internal/database/migrations/019_create_system_events_table.up.sql
new file mode 100644
index 0000000..7102022
--- /dev/null
+++ b/aggregator-server/internal/database/migrations/019_create_system_events_table.up.sql
@@ -0,0 +1,39 @@
+-- Migration: Create system_events table for unified event logging
+-- Reference: docs/ERROR_FLOW_AUDIT.md
+
+CREATE TABLE IF NOT EXISTS system_events (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ agent_id UUID REFERENCES agents(id) ON DELETE CASCADE,
+ event_type VARCHAR(50) NOT NULL, -- 'agent_update', 'agent_startup', 'agent_scan', 'server_build', etc.
+ event_subtype VARCHAR(50) NOT NULL, -- 'success', 'failed', 'info', 'warning', 'critical'
+ severity VARCHAR(20) NOT NULL, -- 'info', 'warning', 'error', 'critical'
+ component VARCHAR(50) NOT NULL, -- 'agent', 'server', 'build', 'download', 'config', etc.
+ message TEXT,
+ metadata JSONB DEFAULT '{}', -- Structured event data (stack traces, HTTP codes, etc.)
+ created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
+);
+
+-- Performance indexes for common query patterns
+CREATE INDEX idx_system_events_agent_id ON system_events(agent_id);
+CREATE INDEX idx_system_events_type_subtype ON system_events(event_type, event_subtype);
+CREATE INDEX idx_system_events_created_at ON system_events(created_at DESC);
+CREATE INDEX idx_system_events_severity ON system_events(severity);
+CREATE INDEX idx_system_events_component ON system_events(component);
+
+-- Composite index for agent timeline queries (agent + time range)
+CREATE INDEX idx_system_events_agent_timeline ON system_events(agent_id, created_at DESC);
+
+-- Partial index for error events (faster error dashboard queries)
+CREATE INDEX idx_system_events_errors ON system_events(severity, created_at DESC)
+WHERE severity IN ('error', 'critical');
+
+-- GIN index for metadata JSONB queries (allows searching event metadata)
+CREATE INDEX idx_system_events_metadata_gin ON system_events USING GIN(metadata);
+
+-- Comment for documentation
+COMMENT ON TABLE system_events IS 'Unified event logging table for all system events (agent + server)';
+COMMENT ON COLUMN system_events.event_type IS 'High-level event category (e.g., agent_update, agent_startup)';
+COMMENT ON COLUMN system_events.event_subtype IS 'Event outcome/status (e.g., success, failed, info, warning)';
+COMMENT ON COLUMN system_events.severity IS 'Event severity level for filtering and alerting';
+COMMENT ON COLUMN system_events.component IS 'System component that generated the event';
+COMMENT ON COLUMN system_events.metadata IS 'JSONB field for structured event data (stack traces, HTTP codes, etc.)';
\ No newline at end of file
diff --git a/aggregator-server/internal/database/migrations/020_add_command_signatures.down.sql b/aggregator-server/internal/database/migrations/020_add_command_signatures.down.sql
new file mode 100644
index 0000000..de2fde0
--- /dev/null
+++ b/aggregator-server/internal/database/migrations/020_add_command_signatures.down.sql
@@ -0,0 +1,26 @@
+-- Down Migration: Remove security features for RedFlag v0.2.x
+-- Purpose: Rollback migration 020 - remove security-related tables and columns
+
+-- Drop indexes first
+DROP INDEX IF EXISTS idx_security_settings_category;
+DROP INDEX IF EXISTS idx_security_settings_restart;
+DROP INDEX IF EXISTS idx_security_audit_timestamp;
+DROP INDEX IF EXISTS idx_security_incidents_type;
+DROP INDEX IF EXISTS idx_security_incidents_severity;
+DROP INDEX IF EXISTS idx_security_incidents_resolved;
+DROP INDEX IF EXISTS idx_signing_keys_active;
+DROP INDEX IF EXISTS idx_signing_keys_algorithm;
+
+-- Drop check constraints
+ALTER TABLE security_settings DROP CONSTRAINT IF EXISTS chk_value_type;
+ALTER TABLE security_incidents DROP CONSTRAINT IF EXISTS chk_incident_severity;
+ALTER TABLE signing_keys DROP CONSTRAINT IF EXISTS chk_algorithm;
+
+-- Drop tables in reverse order to avoid foreign key constraints
+DROP TABLE IF EXISTS signing_keys;
+DROP TABLE IF EXISTS security_incidents;
+DROP TABLE IF EXISTS security_settings_audit;
+DROP TABLE IF EXISTS security_settings;
+
+-- Remove signature column from agent_commands table
+ALTER TABLE agent_commands DROP COLUMN IF EXISTS signature;
\ No newline at end of file
diff --git a/aggregator-server/internal/database/migrations/020_add_command_signatures.up.sql b/aggregator-server/internal/database/migrations/020_add_command_signatures.up.sql
new file mode 100644
index 0000000..408fef3
--- /dev/null
+++ b/aggregator-server/internal/database/migrations/020_add_command_signatures.up.sql
@@ -0,0 +1,106 @@
+-- Migration: Add security features for RedFlag v0.2.x
+-- Purpose: Add command signatures, security settings, audit trail, incidents tracking, and signing keys
+
+-- Add signature column to agent_commands table
+ALTER TABLE agent_commands ADD COLUMN IF NOT EXISTS signature VARCHAR(128);
+
+-- Create security_settings table for user-configurable settings
+CREATE TABLE IF NOT EXISTS security_settings (
+ id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+ category VARCHAR(50) NOT NULL,
+ key VARCHAR(100) NOT NULL,
+ value JSONB NOT NULL,
+ value_type VARCHAR(20) NOT NULL,
+ requires_restart BOOLEAN DEFAULT false,
+ updated_at TIMESTAMP DEFAULT NOW(),
+ updated_by UUID REFERENCES users(id),
+ is_encrypted BOOLEAN DEFAULT false,
+ description TEXT,
+ validation_rules JSONB,
+ UNIQUE(category, key)
+);
+
+-- Create security_settings_audit table for audit trail
+CREATE TABLE IF NOT EXISTS security_settings_audit (
+ id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+ setting_id UUID REFERENCES security_settings(id),
+ previous_value JSONB,
+ new_value JSONB,
+ changed_by UUID REFERENCES users(id),
+ changed_at TIMESTAMP DEFAULT NOW(),
+ ip_address INET,
+ user_agent TEXT,
+ reason TEXT
+);
+
+-- Create security_incidents table for tracking security events
+CREATE TABLE IF NOT EXISTS security_incidents (
+ id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+ incident_type VARCHAR(50) NOT NULL,
+ severity VARCHAR(20) NOT NULL,
+ agent_id UUID REFERENCES agents(id),
+ description TEXT NOT NULL,
+ metadata JSONB,
+ resolved BOOLEAN DEFAULT false,
+ resolved_at TIMESTAMP,
+ resolved_by UUID REFERENCES users(id),
+ created_at TIMESTAMP DEFAULT NOW()
+);
+
+-- Create signing_keys table for public key rotation
+CREATE TABLE IF NOT EXISTS signing_keys (
+ id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
+ key_id VARCHAR(64) UNIQUE NOT NULL,
+ public_key TEXT NOT NULL,
+ algorithm VARCHAR(20) DEFAULT 'ed25519',
+ is_active BOOLEAN DEFAULT true,
+ is_primary BOOLEAN DEFAULT false,
+ created_at TIMESTAMP DEFAULT NOW(),
+ deprecated_at TIMESTAMP,
+ version INTEGER DEFAULT 1
+);
+
+-- Create indexes for security_settings
+CREATE INDEX IF NOT EXISTS idx_security_settings_category ON security_settings(category);
+CREATE INDEX IF NOT EXISTS idx_security_settings_restart ON security_settings(requires_restart);
+
+-- Create indexes for security_settings_audit
+CREATE INDEX IF NOT EXISTS idx_security_audit_timestamp ON security_settings_audit(changed_at DESC);
+
+-- Create indexes for security_incidents
+CREATE INDEX IF NOT EXISTS idx_security_incidents_type ON security_incidents(incident_type);
+CREATE INDEX IF NOT EXISTS idx_security_incidents_severity ON security_incidents(severity);
+CREATE INDEX IF NOT EXISTS idx_security_incidents_resolved ON security_incidents(resolved);
+
+-- Create indexes for signing_keys
+CREATE INDEX IF NOT EXISTS idx_signing_keys_active ON signing_keys(is_active, is_primary);
+CREATE INDEX IF NOT EXISTS idx_signing_keys_algorithm ON signing_keys(algorithm);
+
+-- Add comments for documentation
+COMMENT ON TABLE security_settings IS 'Stores user-configurable security settings for the RedFlag system';
+COMMENT ON TABLE security_settings_audit IS 'Audit trail for all changes to security settings';
+COMMENT ON TABLE security_incidents IS 'Tracks security incidents and events in the system';
+COMMENT ON TABLE signing_keys IS 'Stores public signing keys with support for key rotation';
+
+COMMENT ON COLUMN agent_commands.signature IS 'Digital signature of the command for verification';
+COMMENT ON COLUMN security_settings.is_encrypted IS 'Indicates if the setting value should be encrypted at rest';
+COMMENT ON COLUMN security_settings.validation_rules IS 'JSON schema for validating the setting value';
+COMMENT ON COLUMN security_settings_audit.ip_address IS 'IP address of the user who made the change';
+COMMENT ON COLUMN security_settings_audit.reason IS 'Optional reason for the configuration change';
+COMMENT ON COLUMN security_incidents.metadata IS 'Additional structured data about the incident';
+COMMENT ON COLUMN signing_keys.key_id IS 'Unique identifier for the signing key (e.g., fingerprint)';
+COMMENT ON COLUMN signing_keys.version IS 'Version number for tracking key iterations';
+
+-- Add check constraints for data integrity
+ALTER TABLE security_settings ADD CONSTRAINT chk_value_type CHECK (value_type IN ('string', 'number', 'boolean', 'array', 'object'));
+
+ALTER TABLE security_incidents ADD CONSTRAINT chk_incident_severity CHECK (severity IN ('low', 'medium', 'high', 'critical'));
+
+ALTER TABLE signing_keys ADD CONSTRAINT chk_algorithm CHECK (algorithm IN ('ed25519', 'rsa', 'ecdsa', 'rsa-pss'));
+
+-- Grant permissions (adjust as needed for your setup)
+-- GRANT ALL PRIVILEGES ON TABLE security_settings TO redflag_user;
+-- GRANT ALL PRIVILEGES ON TABLE security_settings_audit TO redflag_user;
+-- GRANT ALL PRIVILEGES ON TABLE security_incidents TO redflag_user;
+-- GRANT ALL PRIVILEGES ON TABLE signing_keys TO redflag_user;
+-- GRANT USAGE ON SCHEMA public TO redflag_user;
\ No newline at end of file
diff --git a/aggregator-server/internal/database/queries/commands.go b/aggregator-server/internal/database/queries/commands.go
index 4764a3d..3a80ab0 100644
--- a/aggregator-server/internal/database/queries/commands.go
+++ b/aggregator-server/internal/database/queries/commands.go
@@ -22,9 +22,9 @@ func NewCommandQueries(db *sqlx.DB) *CommandQueries {
func (q *CommandQueries) CreateCommand(cmd *models.AgentCommand) error {
query := `
INSERT INTO agent_commands (
- id, agent_id, command_type, params, status, source, retried_from_id
+ id, agent_id, command_type, params, status, source, signature, retried_from_id
) VALUES (
- :id, :agent_id, :command_type, :params, :status, :source, :retried_from_id
+ :id, :agent_id, :command_type, :params, :status, :source, :signature, :retried_from_id
)
`
_, err := q.db.NamedExec(query, cmd)
@@ -200,6 +200,7 @@ func (q *CommandQueries) GetActiveCommands() ([]models.ActiveCommandInfo, error)
c.params,
c.status,
c.source,
+ c.signature,
c.created_at,
c.sent_at,
c.result,
@@ -262,6 +263,7 @@ func (q *CommandQueries) GetRecentCommands(limit int) ([]models.ActiveCommandInf
c.command_type,
c.status,
c.source,
+ c.signature,
c.created_at,
c.sent_at,
c.completed_at,
diff --git a/aggregator-server/internal/database/queries/registration_tokens.go b/aggregator-server/internal/database/queries/registration_tokens.go
index 0b8fc6c..704b62b 100644
--- a/aggregator-server/internal/database/queries/registration_tokens.go
+++ b/aggregator-server/internal/database/queries/registration_tokens.go
@@ -116,7 +116,7 @@ func (q *RegistrationTokenQueries) MarkTokenUsed(token string, agentID uuid.UUID
return nil
}
-// GetActiveRegistrationTokens returns all active tokens
+// GetActiveRegistrationTokens returns all active tokens that haven't expired
func (q *RegistrationTokenQueries) GetActiveRegistrationTokens() ([]RegistrationToken, error) {
var tokens []RegistrationToken
query := `
@@ -124,7 +124,7 @@ func (q *RegistrationTokenQueries) GetActiveRegistrationTokens() ([]Registration
revoked, revoked_at, revoked_reason, status, created_by, metadata,
max_seats, seats_used
FROM registration_tokens
- WHERE status = 'active'
+ WHERE status = 'active' AND expires_at > NOW()
ORDER BY created_at DESC
`
diff --git a/aggregator-server/internal/database/queries/users.go b/aggregator-server/internal/database/queries/users.go
deleted file mode 100644
index 31a3a3d..0000000
--- a/aggregator-server/internal/database/queries/users.go
+++ /dev/null
@@ -1,123 +0,0 @@
-package queries
-
-import (
- "time"
-
- "github.com/Fimeg/RedFlag/aggregator-server/internal/models"
- "github.com/google/uuid"
- "github.com/jmoiron/sqlx"
- "golang.org/x/crypto/bcrypt"
-)
-
-type UserQueries struct {
- db *sqlx.DB
-}
-
-func NewUserQueries(db *sqlx.DB) *UserQueries {
- return &UserQueries{db: db}
-}
-
-// CreateUser inserts a new user into the database with password hashing
-func (q *UserQueries) CreateUser(username, email, password, role string) (*models.User, error) {
- // Hash the password
- hashedPassword, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
- if err != nil {
- return nil, err
- }
-
- user := &models.User{
- ID: uuid.New(),
- Username: username,
- Email: email,
- PasswordHash: string(hashedPassword),
- Role: role,
- CreatedAt: time.Now().UTC(),
- }
-
- query := `
- INSERT INTO users (
- id, username, email, password_hash, role, created_at
- ) VALUES (
- :id, :username, :email, :password_hash, :role, :created_at
- )
- RETURNING *
- `
-
- rows, err := q.db.NamedQuery(query, user)
- if err != nil {
- return nil, err
- }
- defer rows.Close()
-
- if rows.Next() {
- if err := rows.StructScan(user); err != nil {
- return nil, err
- }
- return user, nil
- }
-
- return nil, nil
-}
-
-// GetUserByUsername retrieves a user by username
-func (q *UserQueries) GetUserByUsername(username string) (*models.User, error) {
- var user models.User
- query := `SELECT * FROM users WHERE username = $1`
- err := q.db.Get(&user, query, username)
- if err != nil {
- return nil, err
- }
- return &user, nil
-}
-
-// VerifyCredentials checks if the provided username and password are valid
-func (q *UserQueries) VerifyCredentials(username, password string) (*models.User, error) {
- user, err := q.GetUserByUsername(username)
- if err != nil {
- return nil, err
- }
-
- // Compare the provided password with the stored hash
- err = bcrypt.CompareHashAndPassword([]byte(user.PasswordHash), []byte(password))
- if err != nil {
- return nil, err // Invalid password
- }
-
- // Update last login time
- q.UpdateLastLogin(user.ID)
-
- // Don't return password hash
- user.PasswordHash = ""
- return user, nil
-}
-
-// UpdateLastLogin updates the user's last login timestamp
-func (q *UserQueries) UpdateLastLogin(id uuid.UUID) error {
- query := `UPDATE users SET last_login = $1 WHERE id = $2`
- _, err := q.db.Exec(query, time.Now().UTC(), id)
- return err
-}
-
-// GetUserByID retrieves a user by ID
-func (q *UserQueries) GetUserByID(id uuid.UUID) (*models.User, error) {
- var user models.User
- query := `SELECT id, username, email, role, created_at, last_login FROM users WHERE id = $1`
- err := q.db.Get(&user, query, id)
- if err != nil {
- return nil, err
- }
- return &user, nil
-}
-
-// EnsureAdminUser creates an admin user if one doesn't exist
-func (q *UserQueries) EnsureAdminUser(username, email, password string) error {
- // Check if admin user already exists
- existingUser, err := q.GetUserByUsername(username)
- if err == nil && existingUser != nil {
- return nil // Admin user already exists
- }
-
- // Create admin user
- _, err = q.CreateUser(username, email, password, "admin")
- return err
-}
\ No newline at end of file
diff --git a/aggregator-server/internal/logging/example_integration.go b/aggregator-server/internal/logging/example_integration.go
new file mode 100644
index 0000000..7e8c793
--- /dev/null
+++ b/aggregator-server/internal/logging/example_integration.go
@@ -0,0 +1,118 @@
+package logging
+
+// This file contains example code showing how to integrate the security logger
+// into various parts of the server application.
+
+import (
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/config"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/google/uuid"
+ "github.com/jmoiron/sqlx"
+)
+
+// Example of how to initialize the security logger in main.go
+func ExampleInitializeSecurityLogger(cfg *config.Config, db *sqlx.DB) (*SecurityLogger, error) {
+ // Convert config to security logger config
+ secConfig := SecurityLogConfig{
+ Enabled: cfg.SecurityLogging.Enabled,
+ Level: cfg.SecurityLogging.Level,
+ LogSuccesses: cfg.SecurityLogging.LogSuccesses,
+ FilePath: cfg.SecurityLogging.FilePath,
+ MaxSizeMB: cfg.SecurityLogging.MaxSizeMB,
+ MaxFiles: cfg.SecurityLogging.MaxFiles,
+ RetentionDays: cfg.SecurityLogging.RetentionDays,
+ LogToDatabase: cfg.SecurityLogging.LogToDatabase,
+ HashIPAddresses: cfg.SecurityLogging.HashIPAddresses,
+ }
+
+ // Create the security logger
+ securityLogger, err := NewSecurityLogger(secConfig, db)
+ if err != nil {
+ return nil, err
+ }
+
+ return securityLogger, nil
+}
+
+// Example of using the security logger in authentication handlers
+func ExampleAuthHandler(securityLogger *SecurityLogger, clientIP string) {
+ // Example: JWT validation failed
+ securityLogger.LogAuthJWTValidationFailure(
+ uuid.Nil, // Agent ID might not be known yet
+ "invalid.jwt.token",
+ "expired signature",
+ )
+
+ // Example: Unauthorized access attempt
+ securityLogger.LogUnauthorizedAccessAttempt(
+ clientIP,
+ "/api/v1/admin/users",
+ "insufficient privileges",
+ uuid.Nil,
+ )
+}
+
+// Example of using the security logger in command/verification handlers
+func ExampleCommandVerificationHandler(securityLogger *SecurityLogger, agentID, commandID uuid.UUID, signature string) {
+ // Simulate signature verification
+ signatureValid := false // In real code, this would be actual verification result
+
+ if !signatureValid {
+ securityLogger.LogCommandVerificationFailure(
+ agentID,
+ commandID,
+ "signature mismatch: expected X, got Y",
+ )
+ } else {
+ // Only log success if configured to do so
+ if securityLogger.config.LogSuccesses {
+ event := models.NewSecurityEvent(
+ "INFO",
+ models.SecurityEventTypes.CmdSignatureVerificationSuccess,
+ agentID,
+ "Command signature verification succeeded",
+ )
+ event.WithDetail("command_id", commandID.String())
+ securityLogger.Log(event)
+ }
+ }
+}
+
+// Example of using the security logger in update handlers
+func ExampleUpdateHandler(securityLogger *SecurityLogger, agentID uuid.UUID, updateData []byte, signature string) {
+ // Simulate update nonce validation
+ nonceValid := false // In real code, this would be actual validation
+
+ if !nonceValid {
+ securityLogger.LogNonceValidationFailure(
+ agentID,
+ "12345678-1234-1234-1234-123456789012",
+ "nonce not found in database",
+ )
+ }
+
+ // Simulate signature verification
+ signatureValid := false
+ if !signatureValid {
+ securityLogger.LogUpdateSignatureValidationFailure(
+ agentID,
+ "update-123",
+ "invalid signature format",
+ )
+ }
+}
+
+// Example of using the security logger on agent registration
+func ExampleAgentRegistrationHandler(securityLogger *SecurityLogger, clientIP string) {
+ securityLogger.LogAgentRegistrationFailed(
+ clientIP,
+ "invalid registration token",
+ )
+}
+
+// Example of checking if a private key is configured
+func ExampleCheckPrivateKey(securityLogger *SecurityLogger, cfg *config.Config) {
+ if cfg.SigningPrivateKey == "" {
+ securityLogger.LogPrivateKeyNotConfigured()
+ }
+}
\ No newline at end of file
diff --git a/aggregator-server/internal/logging/security_logger.go b/aggregator-server/internal/logging/security_logger.go
new file mode 100644
index 0000000..19852f8
--- /dev/null
+++ b/aggregator-server/internal/logging/security_logger.go
@@ -0,0 +1,363 @@
+package logging
+
+import (
+ "encoding/json"
+ "fmt"
+ "log"
+ "os"
+ "path/filepath"
+ "sync"
+ "time"
+
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/google/uuid"
+ "github.com/jmoiron/sqlx"
+ "gopkg.in/natefinch/lumberjack.v2"
+)
+
+// SecurityLogConfig holds configuration for security logging
+type SecurityLogConfig struct {
+ Enabled bool `yaml:"enabled" env:"REDFLAG_SECURITY_LOG_ENABLED" default:"true"`
+ Level string `yaml:"level" env:"REDFLAG_SECURITY_LOG_LEVEL" default:"warning"` // none, error, warning, info, debug
+ LogSuccesses bool `yaml:"log_successes" env:"REDFLAG_SECURITY_LOG_SUCCESSES" default:"false"`
+ FilePath string `yaml:"file_path" env:"REDFLAG_SECURITY_LOG_PATH" default:"/var/log/redflag/security.json"`
+ MaxSizeMB int `yaml:"max_size_mb" env:"REDFLAG_SECURITY_LOG_MAX_SIZE" default:"100"`
+ MaxFiles int `yaml:"max_files" env:"REDFLAG_SECURITY_LOG_MAX_FILES" default:"10"`
+ RetentionDays int `yaml:"retention_days" env:"REDFLAG_SECURITY_LOG_RETENTION" default:"90"`
+ LogToDatabase bool `yaml:"log_to_database" env:"REDFLAG_SECURITY_LOG_TO_DB" default:"true"`
+ HashIPAddresses bool `yaml:"hash_ip_addresses" env:"REDFLAG_SECURITY_LOG_HASH_IP" default:"true"`
+}
+
+// SecurityLogger handles structured security event logging
+type SecurityLogger struct {
+ config SecurityLogConfig
+ logger *log.Logger
+ db *sqlx.DB
+ lumberjack *lumberjack.Logger
+ mu sync.RWMutex
+ buffer chan *models.SecurityEvent
+ bufferSize int
+ stopChan chan struct{}
+ wg sync.WaitGroup
+}
+
+// NewSecurityLogger creates a new security logger instance
+func NewSecurityLogger(config SecurityLogConfig, db *sqlx.DB) (*SecurityLogger, error) {
+ if !config.Enabled || config.Level == "none" {
+ return &SecurityLogger{
+ config: config,
+ logger: log.New(os.Stdout, "[SECURITY] ", log.LstdFlags|log.LUTC),
+ }, nil
+ }
+
+ // Ensure log directory exists
+ logDir := filepath.Dir(config.FilePath)
+ if err := os.MkdirAll(logDir, 0755); err != nil {
+ return nil, fmt.Errorf("failed to create security log directory: %w", err)
+ }
+
+ // Setup rotating file writer; named logRotator to avoid shadowing the
+ // lumberjack package
+ logRotator := &lumberjack.Logger{
+ Filename: config.FilePath,
+ MaxSize: config.MaxSizeMB,
+ MaxBackups: config.MaxFiles,
+ MaxAge: config.RetentionDays,
+ Compress: true,
+ }
+
+ logger := &SecurityLogger{
+ config: config,
+ logger: log.New(logRotator, "", 0), // No prefix, we'll add timestamps ourselves
+ db: db,
+ lumberjack: logRotator,
+ buffer: make(chan *models.SecurityEvent, 1000),
+ bufferSize: 1000,
+ stopChan: make(chan struct{}),
+ }
+
+ // Start background processor
+ logger.wg.Add(1)
+ go logger.processEvents()
+
+ return logger, nil
+}
+
+// Log writes a security event
+func (sl *SecurityLogger) Log(event *models.SecurityEvent) error {
+ if !sl.config.Enabled || sl.config.Level == "none" {
+ return nil
+ }
+
+ // Skip successes unless configured to log them
+ if !sl.config.LogSuccesses && event.EventType == models.SecurityEventTypes.CmdSignatureVerificationSuccess {
+ return nil
+ }
+
+ // Filter by log level
+ if !sl.shouldLogLevel(event.Level) {
+ return nil
+ }
+
+ // Hash IP addresses if configured
+ if sl.config.HashIPAddresses && event.IPAddress != "" {
+ event.HashIPAddress()
+ }
+
+ // Try to send to buffer (non-blocking)
+ select {
+ case sl.buffer <- event:
+ default:
+ // Buffer full, log directly synchronously
+ return sl.writeEvent(event)
+ }
+
+ return nil
+}
+
+// LogCommandVerificationFailure logs a command signature verification failure
+func (sl *SecurityLogger) LogCommandVerificationFailure(agentID, commandID uuid.UUID, reason string) {
+ event := models.NewSecurityEvent("CRITICAL", models.SecurityEventTypes.CmdSignatureVerificationFailed, agentID, "Command signature verification failed")
+ event.WithDetail("command_id", commandID.String())
+ event.WithDetail("reason", reason)
+
+ _ = sl.Log(event)
+}
+
+// LogUpdateSignatureValidationFailure logs an update signature validation failure
+func (sl *SecurityLogger) LogUpdateSignatureValidationFailure(agentID uuid.UUID, updateID string, reason string) {
+ event := models.NewSecurityEvent("CRITICAL", models.SecurityEventTypes.UpdateSignatureVerificationFailed, agentID, "Update signature validation failed")
+ event.WithDetail("update_id", updateID)
+ event.WithDetail("reason", reason)
+
+ _ = sl.Log(event)
+}
+
+// LogCommandSigned logs successful command signing
+func (sl *SecurityLogger) LogCommandSigned(cmd *models.AgentCommand) {
+ event := models.NewSecurityEvent("INFO", models.SecurityEventTypes.CmdSigned, cmd.AgentID, "Command signed successfully")
+ event.WithDetail("command_id", cmd.ID.String())
+ event.WithDetail("command_type", cmd.CommandType)
+ event.WithDetail("signature_present", cmd.Signature != "")
+
+ _ = sl.Log(event)
+}
+
+// LogNonceValidationFailure logs a nonce validation failure
+func (sl *SecurityLogger) LogNonceValidationFailure(agentID uuid.UUID, nonce string, reason string) {
+ event := models.NewSecurityEvent("WARNING", models.SecurityEventTypes.UpdateNonceInvalid, agentID, "Update nonce validation failed")
+ event.WithDetail("nonce", nonce)
+ event.WithDetail("reason", reason)
+
+ _ = sl.Log(event)
+}
+
+// LogMachineIDMismatch logs a machine ID mismatch
+func (sl *SecurityLogger) LogMachineIDMismatch(agentID uuid.UUID, expected, actual string) {
+ event := models.NewSecurityEvent("WARNING", models.SecurityEventTypes.MachineIDMismatch, agentID, "Machine ID mismatch detected")
+ event.WithDetail("expected_machine_id", expected)
+ event.WithDetail("actual_machine_id", actual)
+
+ _ = sl.Log(event)
+}
+
+// LogAuthJWTValidationFailure logs a JWT validation failure
+func (sl *SecurityLogger) LogAuthJWTValidationFailure(agentID uuid.UUID, token string, reason string) {
+ event := models.NewSecurityEvent("WARNING", models.SecurityEventTypes.AuthJWTValidationFailed, agentID, "JWT authentication failed")
+ event.WithDetail("reason", reason)
+ if len(token) > 0 {
+ event.WithDetail("token_preview", token[:min(len(token), 20)]+"...")
+ }
+
+ _ = sl.Log(event)
+}
+
+// LogPrivateKeyNotConfigured logs when private key is not configured
+func (sl *SecurityLogger) LogPrivateKeyNotConfigured() {
+ event := models.NewSecurityEvent("CRITICAL", models.SecurityEventTypes.PrivateKeyNotConfigured, uuid.Nil, "Private signing key not configured")
+ event.WithDetail("component", "server")
+
+ _ = sl.Log(event)
+}
+
+// LogAgentRegistrationFailed logs an agent registration failure
+func (sl *SecurityLogger) LogAgentRegistrationFailed(ip string, reason string) {
+ event := models.NewSecurityEvent("WARNING", models.SecurityEventTypes.AgentRegistrationFailed, uuid.Nil, "Agent registration failed")
+ event.WithIPAddress(ip)
+ event.WithDetail("reason", reason)
+
+ _ = sl.Log(event)
+}
+
+// LogUnauthorizedAccessAttempt logs an unauthorized access attempt
+func (sl *SecurityLogger) LogUnauthorizedAccessAttempt(ip, endpoint, reason string, agentID uuid.UUID) {
+ event := models.NewSecurityEvent("WARNING", models.SecurityEventTypes.UnauthorizedAccessAttempt, agentID, "Unauthorized access attempt")
+ event.WithIPAddress(ip)
+ event.WithDetail("endpoint", endpoint)
+ event.WithDetail("reason", reason)
+
+ _ = sl.Log(event)
+}
+
+// processEvents processes events from the buffer in the background
+func (sl *SecurityLogger) processEvents() {
+ defer sl.wg.Done()
+
+ ticker := time.NewTicker(5 * time.Second)
+ defer ticker.Stop()
+
+ batch := make([]*models.SecurityEvent, 0, 100)
+
+ for {
+ select {
+ case event := <-sl.buffer:
+ batch = append(batch, event)
+ if len(batch) >= 100 {
+ sl.processBatch(batch)
+ batch = batch[:0]
+ }
+ case <-ticker.C:
+ if len(batch) > 0 {
+ sl.processBatch(batch)
+ batch = batch[:0]
+ }
+ case <-sl.stopChan:
+ // Process any remaining events
+ for len(sl.buffer) > 0 {
+ batch = append(batch, <-sl.buffer)
+ }
+ if len(batch) > 0 {
+ sl.processBatch(batch)
+ }
+ return
+ }
+ }
+}
+
+// processBatch processes a batch of events
+func (sl *SecurityLogger) processBatch(events []*models.SecurityEvent) {
+ for _, event := range events {
+ _ = sl.writeEvent(event)
+ }
+}
+
+// writeEvent writes an event to the configured outputs
+func (sl *SecurityLogger) writeEvent(event *models.SecurityEvent) error {
+ // Write to file
+ if err := sl.writeToFile(event); err != nil {
+ log.Printf("[ERROR] Failed to write security event to file: %v", err)
+ }
+
+ // Write to database if configured (ShouldLogToDatabase also checks the flag)
+ if sl.db != nil && event.ShouldLogToDatabase(sl.config.LogToDatabase) {
+ if err := sl.writeToDatabase(event); err != nil {
+ log.Printf("[ERROR] Failed to write security event to database: %v", err)
+ }
+ }
+
+ return nil
+}
+
+// writeToFile writes the event as JSON to the log file
+func (sl *SecurityLogger) writeToFile(event *models.SecurityEvent) error {
+ jsonData, err := json.Marshal(event)
+ if err != nil {
+ return fmt.Errorf("failed to marshal security event: %w", err)
+ }
+
+ sl.logger.Println(string(jsonData))
+ return nil
+}
+
+// writeToDatabase writes the event to the database
+func (sl *SecurityLogger) writeToDatabase(event *models.SecurityEvent) error {
+ // Create security_events table if not exists
+ if err := sl.ensureSecurityEventsTable(); err != nil {
+ return fmt.Errorf("failed to ensure security_events table: %w", err)
+ }
+
+ // Encode details and metadata as JSON
+ detailsJSON, _ := json.Marshal(event.Details)
+ metadataJSON, _ := json.Marshal(event.Metadata)
+
+ query := `
+ INSERT INTO security_events (timestamp, level, event_type, agent_id, message, trace_id, ip_address, details, metadata)
+ VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`
+
+ _, err := sl.db.Exec(query,
+ event.Timestamp,
+ event.Level,
+ event.EventType,
+ event.AgentID,
+ event.Message,
+ event.TraceID,
+ event.IPAddress,
+ detailsJSON,
+ metadataJSON,
+ )
+
+ return err
+}
+
+// ensureSecurityEventsTable creates the security_events table if it doesn't exist
+func (sl *SecurityLogger) ensureSecurityEventsTable() error {
+ query := `
+ CREATE TABLE IF NOT EXISTS security_events (
+ id SERIAL PRIMARY KEY,
+ timestamp TIMESTAMP WITH TIME ZONE NOT NULL,
+ level VARCHAR(20) NOT NULL,
+ event_type VARCHAR(100) NOT NULL,
+ agent_id UUID,
+ message TEXT NOT NULL,
+ trace_id VARCHAR(100),
+ ip_address VARCHAR(100),
+ details JSONB,
+ metadata JSONB
+ );
+ -- PostgreSQL does not support inline INDEX clauses in CREATE TABLE
+ CREATE INDEX IF NOT EXISTS idx_security_events_timestamp ON security_events (timestamp);
+ CREATE INDEX IF NOT EXISTS idx_security_events_agent_id ON security_events (agent_id);
+ CREATE INDEX IF NOT EXISTS idx_security_events_level ON security_events (level);
+ CREATE INDEX IF NOT EXISTS idx_security_events_event_type ON security_events (event_type)`
+
+ _, err := sl.db.Exec(query)
+ return err
+}
+
+// Close closes the security logger and flushes any pending events
+func (sl *SecurityLogger) Close() error {
+ if sl.lumberjack != nil {
+ close(sl.stopChan)
+ sl.wg.Wait()
+ if err := sl.lumberjack.Close(); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// shouldLogLevel checks if the event should be logged based on the configured level.
+// Config values are lowercase (e.g. "warning") while event levels are uppercase
+// (e.g. "WARNING"), so both sides are normalized before comparison.
+func (sl *SecurityLogger) shouldLogLevel(eventLevel string) bool {
+ levels := map[string]int{
+ "NONE": 0,
+ "CRITICAL": 1,
+ "ERROR": 1,
+ "WARN": 2,
+ "WARNING": 2,
+ "INFO": 3,
+ "DEBUG": 4,
+ }
+
+ configLevel, exists := levels[strings.ToUpper(sl.config.Level)]
+ if !exists {
+ configLevel = 2 // Default to WARNING
+ }
+ eventLvl, exists := levels[strings.ToUpper(eventLevel)]
+ if !exists {
+ eventLvl = 2 // Default to WARNING
+ }
+
+ return eventLvl <= configLevel
+}
+
+// min returns the minimum of two integers
+func min(a, b int) int {
+ if a < b {
+ return a
+ }
+ return b
+}
+
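The level-threshold filter in `shouldLogLevel` is easy to get wrong because the config stores lowercase values while events carry uppercase levels. A minimal standalone sketch of the same rank-comparison idea (names hypothetical, not part of the RedFlag codebase), normalizing case on both sides:

```go
package main

import (
	"fmt"
	"strings"
)

// severityRank maps level names to a rank; a higher rank means more verbose.
var severityRank = map[string]int{
	"NONE":     0,
	"CRITICAL": 1,
	"ERROR":    1,
	"WARN":     2,
	"WARNING":  2,
	"INFO":     3,
	"DEBUG":    4,
}

// shouldLog reports whether an event at eventLevel passes the configured
// threshold, ignoring case and treating unknown levels as WARNING.
func shouldLog(configLevel, eventLevel string) bool {
	cfg, ok := severityRank[strings.ToUpper(configLevel)]
	if !ok {
		cfg = 2 // default threshold: WARNING
	}
	evt, ok := severityRank[strings.ToUpper(eventLevel)]
	if !ok {
		evt = 2 // unknown event levels count as WARNING
	}
	return evt <= cfg
}

func main() {
	fmt.Println(shouldLog("warning", "CRITICAL")) // true
	fmt.Println(shouldLog("warning", "DEBUG"))    // false
	fmt.Println(shouldLog("error", "WARNING"))    // false
}
```

Without the normalization, a lowercase config value like `warning` would silently map to rank 0 and suppress every event.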
diff --git a/aggregator-server/internal/models/agent.go b/aggregator-server/internal/models/agent.go
index 55d632a..90b67a2 100644
--- a/aggregator-server/internal/models/agent.go
+++ b/aggregator-server/internal/models/agent.go
@@ -102,6 +102,7 @@ type AgentRegistrationResponse struct {
type TokenRenewalRequest struct {
AgentID uuid.UUID `json:"agent_id" binding:"required"`
RefreshToken string `json:"refresh_token" binding:"required"`
+ AgentVersion string `json:"agent_version,omitempty"` // Optional: agent's current version for upgrade tracking
}
// TokenRenewalResponse is returned after successful token renewal
diff --git a/aggregator-server/internal/models/command.go b/aggregator-server/internal/models/command.go
index 03805ac..f1c67bc 100644
--- a/aggregator-server/internal/models/command.go
+++ b/aggregator-server/internal/models/command.go
@@ -14,6 +14,7 @@ type AgentCommand struct {
Params JSONB `json:"params" db:"params"`
Status string `json:"status" db:"status"`
Source string `json:"source" db:"source"`
+ Signature string `json:"signature,omitempty" db:"signature"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
SentAt *time.Time `json:"sent_at,omitempty" db:"sent_at"`
CompletedAt *time.Time `json:"completed_at,omitempty" db:"completed_at"`
@@ -36,9 +37,10 @@ type RapidPollingConfig struct {
// CommandItem represents a command in the response
type CommandItem struct {
- ID string `json:"id"`
- Type string `json:"type"`
- Params JSONB `json:"params"`
+ ID string `json:"id"`
+ Type string `json:"type"`
+ Params JSONB `json:"params"`
+ Signature string `json:"signature,omitempty"`
}
// Command types
@@ -80,6 +82,7 @@ type ActiveCommandInfo struct {
Params JSONB `json:"params" db:"params"`
Status string `json:"status" db:"status"`
Source string `json:"source" db:"source"`
+ Signature string `json:"signature,omitempty" db:"signature"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
SentAt *time.Time `json:"sent_at,omitempty" db:"sent_at"`
CompletedAt *time.Time `json:"completed_at,omitempty" db:"completed_at"`
diff --git a/aggregator-server/internal/models/security_event.go b/aggregator-server/internal/models/security_event.go
new file mode 100644
index 0000000..9824ca7
--- /dev/null
+++ b/aggregator-server/internal/models/security_event.go
@@ -0,0 +1,111 @@
+package models
+
+import (
+ "crypto/sha256"
+ "fmt"
+ "time"
+
+ "github.com/google/uuid"
+)
+
+// SecurityEvent represents a security-related event that occurred
+type SecurityEvent struct {
+ Timestamp time.Time `json:"timestamp" db:"timestamp"`
+ Level string `json:"level" db:"level"` // CRITICAL, WARNING, INFO, DEBUG
+ EventType string `json:"event_type" db:"event_type"`
+ AgentID uuid.UUID `json:"agent_id,omitempty" db:"agent_id"`
+ Message string `json:"message" db:"message"`
+ TraceID string `json:"trace_id,omitempty" db:"trace_id"`
+ IPAddress string `json:"ip_address,omitempty" db:"ip_address"`
+ Details map[string]interface{} `json:"details,omitempty" db:"details"` // JSON encoded
+ Metadata map[string]interface{} `json:"metadata,omitempty" db:"metadata"` // JSON encoded
+}
+
+// SecurityEventTypes defines all possible security event types
+var SecurityEventTypes = struct {
+ CmdSigned string
+ CmdSignatureVerificationFailed string
+ CmdSignatureVerificationSuccess string
+ UpdateNonceInvalid string
+ UpdateSignatureVerificationFailed string
+ MachineIDMismatch string
+ AuthJWTValidationFailed string
+ PrivateKeyNotConfigured string
+ AgentRegistrationFailed string
+ UnauthorizedAccessAttempt string
+ ConfigTamperingDetected string
+ AnomalousBehavior string
+}{
+ CmdSigned: "CMD_SIGNED",
+ CmdSignatureVerificationFailed: "CMD_SIGNATURE_VERIFICATION_FAILED",
+ CmdSignatureVerificationSuccess: "CMD_SIGNATURE_VERIFICATION_SUCCESS",
+ UpdateNonceInvalid: "UPDATE_NONCE_INVALID",
+ UpdateSignatureVerificationFailed: "UPDATE_SIGNATURE_VERIFICATION_FAILED",
+ MachineIDMismatch: "MACHINE_ID_MISMATCH",
+ AuthJWTValidationFailed: "AUTH_JWT_VALIDATION_FAILED",
+ PrivateKeyNotConfigured: "PRIVATE_KEY_NOT_CONFIGURED",
+ AgentRegistrationFailed: "AGENT_REGISTRATION_FAILED",
+ UnauthorizedAccessAttempt: "UNAUTHORIZED_ACCESS_ATTEMPT",
+ ConfigTamperingDetected: "CONFIG_TAMPERING_DETECTED",
+ AnomalousBehavior: "ANOMALOUS_BEHAVIOR",
+}
+
+// IsCritical returns true if the event is of critical severity
+func (e *SecurityEvent) IsCritical() bool {
+ return e.Level == "CRITICAL"
+}
+
+// IsWarning returns true if the event is a warning
+func (e *SecurityEvent) IsWarning() bool {
+ return e.Level == "WARNING"
+}
+
+// ShouldLogToDatabase determines if this event should be stored in the database
+func (e *SecurityEvent) ShouldLogToDatabase(logToDatabase bool) bool {
+ return logToDatabase && (e.IsCritical() || e.IsWarning())
+}
+
+// HashIPAddress hashes the IP address for privacy
+func (e *SecurityEvent) HashIPAddress() {
+ if e.IPAddress != "" {
+ hash := sha256.Sum256([]byte(e.IPAddress))
+ e.IPAddress = fmt.Sprintf("hashed:%x", hash[:8]) // Store first 8 bytes of hash
+ }
+}
+
+// NewSecurityEvent creates a new security event with current timestamp
+func NewSecurityEvent(level, eventType string, agentID uuid.UUID, message string) *SecurityEvent {
+ return &SecurityEvent{
+ Timestamp: time.Now().UTC(),
+ Level: level,
+ EventType: eventType,
+ AgentID: agentID,
+ Message: message,
+ Details: make(map[string]interface{}),
+ Metadata: make(map[string]interface{}),
+ }
+}
+
+// WithTrace adds a trace ID to the event
+func (e *SecurityEvent) WithTrace(traceID string) *SecurityEvent {
+ e.TraceID = traceID
+ return e
+}
+
+// WithIPAddress adds an IP address to the event
+func (e *SecurityEvent) WithIPAddress(ip string) *SecurityEvent {
+ e.IPAddress = ip
+ return e
+}
+
+// WithDetail adds a key-value detail to the event
+func (e *SecurityEvent) WithDetail(key string, value interface{}) *SecurityEvent {
+ e.Details[key] = value
+ return e
+}
+
+// WithMetadata adds a key-value metadata to the event
+func (e *SecurityEvent) WithMetadata(key string, value interface{}) *SecurityEvent {
+ e.Metadata[key] = value
+ return e
+}
\ No newline at end of file
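`HashIPAddress` keeps only a truncated SHA-256 of the address. A self-contained sketch of the same scheme (stdlib only; the helper name is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashIP mirrors the scheme above: SHA-256 the address and keep only the
// first 8 bytes of the digest, with a prefix marking the value as hashed.
func hashIP(ip string) string {
	sum := sha256.Sum256([]byte(ip))
	return fmt.Sprintf("hashed:%x", sum[:8])
}

func main() {
	fmt.Println(hashIP("192.168.1.10"))      // deterministic per input
	fmt.Println(len(hashIP("192.168.1.10"))) // 23 ("hashed:" + 16 hex chars)
}
```

The same source IP always produces the same token, so correlation across log entries still works. Note that the IPv4 space is small enough to brute-force, so this limits casual exposure rather than providing strong anonymity.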
diff --git a/aggregator-server/internal/models/security_settings.go b/aggregator-server/internal/models/security_settings.go
new file mode 100644
index 0000000..72f96c2
--- /dev/null
+++ b/aggregator-server/internal/models/security_settings.go
@@ -0,0 +1,32 @@
+package models
+
+import (
+ "time"
+
+ "github.com/google/uuid"
+)
+
+// SecuritySetting represents a user-configurable security setting
+type SecuritySetting struct {
+ ID uuid.UUID `json:"id" db:"id"`
+ Category string `json:"category" db:"category"`
+ Key string `json:"key" db:"key"`
+ Value string `json:"value" db:"value"`
+ IsEncrypted bool `json:"is_encrypted" db:"is_encrypted"`
+ CreatedAt time.Time `json:"created_at" db:"created_at"`
+ UpdatedAt *time.Time `json:"updated_at" db:"updated_at"`
+ CreatedBy *uuid.UUID `json:"created_by" db:"created_by"`
+ UpdatedBy *uuid.UUID `json:"updated_by" db:"updated_by"`
+}
+
+// SecuritySettingAudit represents an audit log entry for security setting changes
+type SecuritySettingAudit struct {
+ ID uuid.UUID `json:"id" db:"id"`
+ SettingID uuid.UUID `json:"setting_id" db:"setting_id"`
+ UserID uuid.UUID `json:"user_id" db:"user_id"`
+ Action string `json:"action" db:"action"` // create, update, delete
+ OldValue *string `json:"old_value" db:"old_value"`
+ NewValue *string `json:"new_value" db:"new_value"`
+ Reason string `json:"reason" db:"reason"`
+ CreatedAt time.Time `json:"created_at" db:"created_at"`
+}
\ No newline at end of file
diff --git a/aggregator-server/internal/models/system_event.go b/aggregator-server/internal/models/system_event.go
new file mode 100644
index 0000000..e2ebfb6
--- /dev/null
+++ b/aggregator-server/internal/models/system_event.go
@@ -0,0 +1,79 @@
+package models
+
+import (
+ "time"
+
+ "github.com/google/uuid"
+)
+
+// SystemEvent represents a unified event log entry for all system events
+// This implements the unified event logging system from docs/ERROR_FLOW_AUDIT.md
+type SystemEvent struct {
+ ID uuid.UUID `json:"id" db:"id"`
+ AgentID *uuid.UUID `json:"agent_id,omitempty" db:"agent_id"` // Pointer to allow NULL for server events
+ EventType string `json:"event_type" db:"event_type"` // e.g., 'agent_update', 'agent_startup', 'server_build'
+ EventSubtype string `json:"event_subtype" db:"event_subtype"` // e.g., 'success', 'failed', 'info', 'warning'
+ Severity string `json:"severity" db:"severity"` // 'info', 'warning', 'error', 'critical'
+ Component string `json:"component" db:"component"` // 'agent', 'server', 'build', 'download', 'config', etc.
+ Message string `json:"message" db:"message"`
+ Metadata map[string]interface{} `json:"metadata,omitempty" db:"metadata"` // JSONB for structured data
+ CreatedAt time.Time `json:"created_at" db:"created_at"`
+}
+
+// Event type constants
+const (
+ EventTypeAgentStartup = "agent_startup"
+ EventTypeAgentRegistration = "agent_registration"
+ EventTypeAgentCheckIn = "agent_checkin"
+ EventTypeAgentScan = "agent_scan"
+ EventTypeAgentUpdate = "agent_update"
+ EventTypeAgentConfig = "agent_config"
+ EventTypeAgentMigration = "agent_migration"
+ EventTypeAgentShutdown = "agent_shutdown"
+ EventTypeServerBuild = "server_build"
+ EventTypeServerDownload = "server_download"
+ EventTypeServerConfig = "server_config"
+ EventTypeServerAuth = "server_auth"
+ EventTypeDownload = "download"
+ EventTypeMigration = "migration"
+ EventTypeError = "error"
+)
+
+// Event subtype constants
+const (
+ SubtypeSuccess = "success"
+ SubtypeFailed = "failed"
+ SubtypeInfo = "info"
+ SubtypeWarning = "warning"
+ SubtypeCritical = "critical"
+ SubtypeDownloadFailed = "download_failed"
+ SubtypeValidationFailed = "validation_failed"
+ SubtypeConfigCorrupted = "config_corrupted"
+ SubtypeMigrationNeeded = "migration_needed"
+ SubtypePanicRecovered = "panic_recovered"
+ SubtypeTokenExpired = "token_expired"
+ SubtypeNetworkTimeout = "network_timeout"
+ SubtypePermissionDenied = "permission_denied"
+ SubtypeServiceUnavailable = "service_unavailable"
+)
+
+// Severity constants
+const (
+ SeverityInfo = "info"
+ SeverityWarning = "warning"
+ SeverityError = "error"
+ SeverityCritical = "critical"
+)
+
+// Component constants
+const (
+ ComponentAgent = "agent"
+ ComponentServer = "server"
+ ComponentBuild = "build"
+ ComponentDownload = "download"
+ ComponentConfig = "config"
+ ComponentDatabase = "database"
+ ComponentNetwork = "network"
+ ComponentSecurity = "security"
+ ComponentMigration = "migration"
+)
\ No newline at end of file
diff --git a/aggregator-server/internal/models/user.go b/aggregator-server/internal/models/user.go
index 4cfa899..363c325 100644
--- a/aggregator-server/internal/models/user.go
+++ b/aggregator-server/internal/models/user.go
@@ -11,7 +11,6 @@ type User struct {
Username string `json:"username" db:"username"`
Email string `json:"email" db:"email"`
PasswordHash string `json:"-" db:"password_hash"` // Don't include in JSON
- Role string `json:"role" db:"role"`
CreatedAt time.Time `json:"created_at" db:"created_at"`
LastLogin *time.Time `json:"last_login" db:"last_login"`
}
diff --git a/aggregator-server/internal/services/agent_builder.go b/aggregator-server/internal/services/agent_builder.go
index c5991ce..31d13ce 100644
--- a/aggregator-server/internal/services/agent_builder.go
+++ b/aggregator-server/internal/services/agent_builder.go
@@ -76,7 +76,7 @@ func (ab *AgentBuilder) generateConfigJSON(config *AgentConfiguration) (string,
// CRITICAL: Add both version fields explicitly
// These MUST be present or middleware will block the agent
completeConfig["version"] = config.ConfigVersion // Config schema version (e.g., "5")
- completeConfig["agent_version"] = config.AgentVersion // Agent binary version (e.g., "0.1.23.5")
+ completeConfig["agent_version"] = config.AgentVersion // Agent binary version (e.g., "0.1.23.6")
// Add agent metadata
completeConfig["agent_id"] = config.AgentID
diff --git a/aggregator-server/internal/services/agent_lifecycle.go b/aggregator-server/internal/services/agent_lifecycle.go
index 6319ccb..82959f7 100644
--- a/aggregator-server/internal/services/agent_lifecycle.go
+++ b/aggregator-server/internal/services/agent_lifecycle.go
@@ -226,10 +226,16 @@ func (s *AgentLifecycleService) buildResponse(
cfg *AgentConfig,
artifacts *BuildArtifacts,
) *AgentSetupResponse {
+ // Default to amd64 if architecture not specified
+ arch := cfg.Architecture
+ if arch == "" {
+ arch = "amd64"
+ }
+
return &AgentSetupResponse{
AgentID: cfg.AgentID,
ConfigURL: fmt.Sprintf("/api/v1/config/%s", cfg.AgentID),
- BinaryURL: fmt.Sprintf("/api/v1/downloads/%s?version=%s", cfg.Platform, cfg.Version),
+ BinaryURL: fmt.Sprintf("/api/v1/downloads/%s-%s?version=%s", cfg.Platform, arch, cfg.Version),
Signature: artifacts.Signature,
Version: cfg.Version,
Platform: cfg.Platform,
diff --git a/aggregator-server/internal/services/build_orchestrator.go b/aggregator-server/internal/services/build_orchestrator.go
new file mode 100644
index 0000000..550b0d6
--- /dev/null
+++ b/aggregator-server/internal/services/build_orchestrator.go
@@ -0,0 +1,138 @@
+package services
+
+import (
+ "fmt"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/models"
+ "github.com/google/uuid"
+)
+
+// BuildOrchestratorService handles building and signing agent binaries
+type BuildOrchestratorService struct {
+ signingService *SigningService
+ packageQueries *queries.PackageQueries
+ agentDir string // Directory containing pre-built binaries
+}
+
+// NewBuildOrchestratorService creates a new build orchestrator service
+func NewBuildOrchestratorService(signingService *SigningService, packageQueries *queries.PackageQueries, agentDir string) *BuildOrchestratorService {
+ return &BuildOrchestratorService{
+ signingService: signingService,
+ packageQueries: packageQueries,
+ agentDir: agentDir,
+ }
+}
+
+// BuildAndSignAgent builds (or retrieves) and signs an agent binary
+func (s *BuildOrchestratorService) BuildAndSignAgent(version, platform, architecture string) (*models.AgentUpdatePackage, error) {
+ // Determine binary name
+ binaryName := "redflag-agent"
+ if strings.HasPrefix(platform, "windows") {
+ binaryName += ".exe"
+ }
+
+ // Path to pre-built binary
+ binaryPath := filepath.Join(s.agentDir, "binaries", platform, binaryName)
+
+ // Check if binary exists
+ if _, err := os.Stat(binaryPath); os.IsNotExist(err) {
+ return nil, fmt.Errorf("binary not found for platform %s: %w", platform, err)
+ }
+
+ // Sign the binary if signing is enabled
+ if s.signingService.IsEnabled() {
+ signedPackage, err := s.signingService.SignFile(binaryPath)
+ if err != nil {
+ return nil, fmt.Errorf("failed to sign agent binary: %w", err)
+ }
+
+ // Set additional fields
+ signedPackage.Version = version
+ signedPackage.Platform = platform
+ signedPackage.Architecture = architecture
+
+ // Store signed package in database
+ err = s.packageQueries.StoreSignedPackage(signedPackage)
+ if err != nil {
+ return nil, fmt.Errorf("failed to store signed package: %w", err)
+ }
+
+ log.Printf("Successfully signed and stored agent binary: %s (%s/%s)", signedPackage.ID, platform, architecture)
+ return signedPackage, nil
+ } else {
+ log.Printf("Signing disabled, creating unsigned package entry")
+ // Create unsigned package entry for backward compatibility
+ unsignedPackage := &models.AgentUpdatePackage{
+ ID: uuid.New(),
+ Version: version,
+ Platform: platform,
+ Architecture: architecture,
+ BinaryPath: binaryPath,
+ Signature: "",
+ Checksum: "", // Would need to calculate if needed
+ FileSize: 0, // Would need to stat if needed
+ CreatedBy: "build-orchestrator",
+ IsActive: true,
+ }
+
+ // Get file info
+ if info, err := os.Stat(binaryPath); err == nil {
+ unsignedPackage.FileSize = info.Size()
+ }
+
+ // Store unsigned package
+ err := s.packageQueries.StoreSignedPackage(unsignedPackage)
+ if err != nil {
+ return nil, fmt.Errorf("failed to store unsigned package: %w", err)
+ }
+
+ return unsignedPackage, nil
+ }
+}
+
+// SignExistingBinary signs an existing binary file
+func (s *BuildOrchestratorService) SignExistingBinary(binaryPath, version, platform, architecture string) (*models.AgentUpdatePackage, error) {
+ // Check if file exists
+ if _, err := os.Stat(binaryPath); os.IsNotExist(err) {
+ return nil, fmt.Errorf("binary not found: %s", binaryPath)
+ }
+
+ // Sign the binary if signing is enabled
+ if !s.signingService.IsEnabled() {
+ return nil, fmt.Errorf("signing service is disabled")
+ }
+
+ signedPackage, err := s.signingService.SignFile(binaryPath)
+ if err != nil {
+ return nil, fmt.Errorf("failed to sign agent binary: %w", err)
+ }
+
+ // Set additional fields
+ signedPackage.Version = version
+ signedPackage.Platform = platform
+ signedPackage.Architecture = architecture
+
+ // Store signed package in database
+ err = s.packageQueries.StoreSignedPackage(signedPackage)
+ if err != nil {
+ return nil, fmt.Errorf("failed to store signed package: %w", err)
+ }
+
+ log.Printf("Successfully signed and stored agent binary: %s (%s/%s)", signedPackage.ID, platform, architecture)
+ return signedPackage, nil
+}
+
+// GetSignedPackage retrieves a signed package by version and platform
+func (s *BuildOrchestratorService) GetSignedPackage(version, platform, architecture string) (*models.AgentUpdatePackage, error) {
+ return s.packageQueries.GetSignedPackage(version, platform, architecture)
+}
+
+// ListSignedPackages lists all signed packages (with optional filters)
+func (s *BuildOrchestratorService) ListSignedPackages(version, platform string, limit, offset int) ([]models.AgentUpdatePackage, error) {
+ return s.packageQueries.ListUpdatePackages(version, platform, limit, offset)
+}
\ No newline at end of file
diff --git a/aggregator-server/internal/services/build_types.go b/aggregator-server/internal/services/build_types.go
index 34dd83f..206e0c0 100644
--- a/aggregator-server/internal/services/build_types.go
+++ b/aggregator-server/internal/services/build_types.go
@@ -11,7 +11,7 @@ import (
"strings"
"time"
- "github.com/Fimeg/RedFlag/aggregator/pkg/common"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/common"
)
// NewBuildRequest represents a request for a new agent build
diff --git a/aggregator-server/internal/services/config_builder.go b/aggregator-server/internal/services/config_builder.go
index e673d12..287780d 100644
--- a/aggregator-server/internal/services/config_builder.go
+++ b/aggregator-server/internal/services/config_builder.go
@@ -8,7 +8,9 @@ import (
"net/http"
"time"
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
"github.com/google/uuid"
+ "github.com/jmoiron/sqlx"
)
// AgentTemplate defines a template for different agent types
@@ -37,17 +39,16 @@ type PublicKeyResponse struct {
}
// ConfigBuilder handles dynamic agent configuration generation
-// ConfigBuilder builds agent configurations
-// Deprecated: Use services.ConfigService instead
type ConfigBuilder struct {
- serverURL string
- templates map[string]AgentTemplate
- httpClient *http.Client
- publicKeyCache map[string]string
+ serverURL string
+ templates map[string]AgentTemplate
+ httpClient *http.Client
+ publicKeyCache map[string]string
+ scannerConfigQ *queries.ScannerConfigQueries
}
// NewConfigBuilder creates a new configuration builder
-func NewConfigBuilder(serverURL string) *ConfigBuilder {
+func NewConfigBuilder(serverURL string, db *sqlx.DB) *ConfigBuilder {
return &ConfigBuilder{
serverURL: serverURL,
templates: getAgentTemplates(),
@@ -55,6 +56,7 @@ func NewConfigBuilder(serverURL string) *ConfigBuilder {
Timeout: 30 * time.Second,
},
publicKeyCache: make(map[string]string),
+ scannerConfigQ: queries.NewScannerConfigQueries(db),
}
}
@@ -66,6 +68,7 @@ type AgentSetupRequest struct {
Organization string `json:"organization" binding:"required"`
CustomSettings map[string]interface{} `json:"custom_settings,omitempty"`
DeploymentID string `json:"deployment_id,omitempty"`
+ AgentID string `json:"agent_id,omitempty"` // Optional: existing agent ID for upgrades
}
// BuildAgentConfig builds a complete agent configuration
@@ -75,8 +78,8 @@ func (cb *ConfigBuilder) BuildAgentConfig(req AgentSetupRequest) (*AgentConfigur
return nil, err
}
- // Generate agent ID
- agentID := uuid.New().String()
+ // Determine agent ID - use existing if provided and valid, otherwise generate new
+ agentID := cb.determineAgentID(req.AgentID)
// Fetch server public key
serverPublicKey, err := cb.fetchServerPublicKey(req.ServerURL)
@@ -99,6 +102,9 @@ func (cb *ConfigBuilder) BuildAgentConfig(req AgentSetupRequest) (*AgentConfigur
// Build base configuration
config := cb.buildFromTemplate(template, req.CustomSettings)
+ // Override scanner timeouts from database (user-configurable)
+ cb.overrideScannerTimeoutsFromDB(config)
+
// Inject deployment-specific values
cb.injectDeploymentValues(config, req, agentID, registrationToken, serverPublicKey)
@@ -153,7 +159,7 @@ func (cb *ConfigBuilder) BuildAgentConfig(req AgentSetupRequest) (*AgentConfigur
Organization: req.Organization,
Platform: platform,
ConfigVersion: "5", // Config schema version
- AgentVersion: "0.1.23.4", // Agent binary version
+ AgentVersion: "0.1.23.6", // Agent binary version
BuildTime: time.Now(),
SecretsCreated: secretsCreated,
SecretsPath: secretsPath,
@@ -171,7 +177,7 @@ type AgentConfiguration struct {
Organization string `json:"organization"`
Platform string `json:"platform"`
ConfigVersion string `json:"config_version"` // Config schema version (e.g., "5")
- AgentVersion string `json:"agent_version"` // Agent binary version (e.g., "0.1.23.5")
+ AgentVersion string `json:"agent_version"` // Agent binary version (e.g., "0.1.23.6")
BuildTime time.Time `json:"build_time"`
SecretsCreated bool `json:"secrets_created"`
SecretsPath string `json:"secrets_path,omitempty"`
@@ -271,7 +277,7 @@ func (cb *ConfigBuilder) buildFromTemplate(template AgentTemplate, customSetting
// injectDeploymentValues injects deployment-specific values into configuration
func (cb *ConfigBuilder) injectDeploymentValues(config map[string]interface{}, req AgentSetupRequest, agentID, registrationToken, serverPublicKey string) {
config["version"] = "5" // Config schema version (for migration system)
- config["agent_version"] = "0.1.23.5" // Agent binary version (MUST match the binary being served)
+ config["agent_version"] = "0.1.23.6" // Agent binary version (MUST match the binary being served)
config["server_url"] = req.ServerURL
config["agent_id"] = agentID
config["registration_token"] = registrationToken
@@ -285,6 +291,18 @@ func (cb *ConfigBuilder) injectDeploymentValues(config map[string]interface{}, r
}
}
+// determineAgentID checks if an existing agent ID is provided and valid, otherwise generates new
+func (cb *ConfigBuilder) determineAgentID(providedAgentID string) string {
+ if providedAgentID != "" {
+ // Validate it's a proper UUID
+ if _, err := uuid.Parse(providedAgentID); err == nil {
+ return providedAgentID
+ }
+ }
+ // Generate new UUID if none provided or invalid
+ return uuid.New().String()
+}
+
// applyEnvironmentDefaults applies environment-specific configuration defaults
func (cb *ConfigBuilder) applyEnvironmentDefaults(config map[string]interface{}, environment string) {
environmentDefaults := map[string]interface{}{
@@ -493,6 +511,35 @@ func (cb *ConfigBuilder) validateConstraint(field string, value interface{}, con
}
// getAgentTemplates returns the available agent templates
+// overrideScannerTimeoutsFromDB overrides scanner timeouts with values from database
+// This allows users to configure scanner timeouts via the web UI
+func (cb *ConfigBuilder) overrideScannerTimeoutsFromDB(config map[string]interface{}) {
+ if cb.scannerConfigQ == nil {
+ // No database connection, use defaults
+ return
+ }
+
+ // Get subsystems section
+ subsystems, exists := config["subsystems"].(map[string]interface{})
+ if !exists {
+ return
+ }
+
+ // List of scanners that can have configurable timeouts
+ scannerNames := []string{"apt", "dnf", "docker", "windows", "winget", "system", "storage", "updates"}
+
+ for _, scannerName := range scannerNames {
+ scannerConfig, exists := subsystems[scannerName].(map[string]interface{})
+ if !exists {
+ continue
+ }
+
+ // Get timeout from database
+ timeout := cb.scannerConfigQ.GetScannerTimeoutWithDefault(scannerName, 30*time.Minute)
+ scannerConfig["timeout"] = int(timeout.Nanoseconds())
+ }
+}
+
func getAgentTemplates() map[string]AgentTemplate {
return map[string]AgentTemplate{
"linux-server": {
@@ -532,7 +579,7 @@ func getAgentTemplates() map[string]AgentTemplate {
},
"dnf": map[string]interface{}{
"enabled": true,
- "timeout": 45000000000,
+ "timeout": 1800000000000, // 30 minutes - configurable via server settings
"circuit_breaker": map[string]interface{}{
"enabled": true,
"failure_threshold": 3,
@@ -726,4 +773,4 @@ func getAgentTemplates() map[string]AgentTemplate {
},
},
}
-}
\ No newline at end of file
+}
diff --git a/aggregator-server/internal/services/config_service.go b/aggregator-server/internal/services/config_service.go
index eb104c9..6bdf45d 100644
--- a/aggregator-server/internal/services/config_service.go
+++ b/aggregator-server/internal/services/config_service.go
@@ -32,6 +32,11 @@ func NewConfigService(db *sqlx.DB, cfg *config.Config, logger *log.Logger) *Conf
}
}
+// getDB returns the database connection (for access to refresh token queries)
+func (s *ConfigService) getDB() *sqlx.DB {
+ return s.db
+}
+
// AgentConfigData represents agent configuration structure
type AgentConfigData struct {
AgentID string `json:"agent_id"`
@@ -129,18 +134,23 @@ func (s *ConfigService) LoadExistingConfig(agentID string) ([]byte, error) {
return nil, fmt.Errorf("agent not found: %w", err)
}
- // Generate new config based on agent data
+ // For existing registered agents, generate proper config with auth tokens
+ s.logger.Printf("[DEBUG] Generating config for existing agent %s", agentID)
+ machineID := ""
+ if agent.MachineID != nil {
+ machineID = *agent.MachineID
+ }
+
agentCfg := &AgentConfig{
AgentID: agentID,
Version: agent.CurrentVersion,
Platform: agent.OSType,
Architecture: agent.OSArchitecture,
- MachineID: "",
- AgentType: "", // Could be stored in Metadata
+ MachineID: machineID,
+ AgentType: "", // Could be stored in metadata
Hostname: agent.Hostname,
}
- // Use GenerateNewConfig to create config
return s.GenerateNewConfig(agentCfg)
}
diff --git a/aggregator-server/internal/services/docker_secrets.go b/aggregator-server/internal/services/docker_secrets.go
new file mode 100644
index 0000000..84a9c3a
--- /dev/null
+++ b/aggregator-server/internal/services/docker_secrets.go
@@ -0,0 +1,116 @@
+package services
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/docker/docker/api/types"
+ "github.com/docker/docker/api/types/swarm"
+ "github.com/docker/docker/client"
+)
+
+// DockerSecretsService manages Docker secrets via Docker API
+type DockerSecretsService struct {
+ cli *client.Client
+}
+
+// NewDockerSecretsService creates a new Docker secrets service
+func NewDockerSecretsService() (*DockerSecretsService, error) {
+ cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+ if err != nil {
+ return nil, fmt.Errorf("failed to create Docker client: %w", err)
+ }
+
+ // Test connection
+ ctx := context.Background()
+ if _, err := cli.Ping(ctx); err != nil {
+ return nil, fmt.Errorf("failed to connect to Docker daemon: %w", err)
+ }
+
+ return &DockerSecretsService{cli: cli}, nil
+}
+
+// CreateSecret creates a new Docker secret
+func (s *DockerSecretsService) CreateSecret(name, value string) error {
+ ctx := context.Background()
+
+ // Check if secret already exists
+ secrets, err := s.cli.SecretList(ctx, types.SecretListOptions{})
+ if err != nil {
+ return fmt.Errorf("failed to list secrets: %w", err)
+ }
+
+ for _, secret := range secrets {
+ if secret.Spec.Name == name {
+ return fmt.Errorf("secret %s already exists", name)
+ }
+ }
+
+ // Create the secret
+ secretSpec := swarm.SecretSpec{
+ Annotations: swarm.Annotations{
+ Name: name,
+ Labels: map[string]string{
+ "created-by": "redflag-setup",
+ "created-at": fmt.Sprintf("%d", 0), // Use current timestamp in real implementation
+ },
+ },
+ Data: []byte(value),
+ }
+
+ if _, err := s.cli.SecretCreate(ctx, secretSpec); err != nil {
+ return fmt.Errorf("failed to create secret %s: %w", name, err)
+ }
+
+ return nil
+}
+
+// DeleteSecret deletes a Docker secret
+func (s *DockerSecretsService) DeleteSecret(name string) error {
+ ctx := context.Background()
+
+ // Find the secret
+ secrets, err := s.cli.SecretList(ctx, types.SecretListOptions{})
+ if err != nil {
+ return fmt.Errorf("failed to list secrets: %w", err)
+ }
+
+ var secretID string
+ for _, secret := range secrets {
+ if secret.Spec.Name == name {
+ secretID = secret.ID
+ break
+ }
+ }
+
+ if secretID == "" {
+ return fmt.Errorf("secret %s not found", name)
+ }
+
+ if err := s.cli.SecretRemove(ctx, secretID); err != nil {
+ return fmt.Errorf("failed to remove secret %s: %w", name, err)
+ }
+
+ return nil
+}
+
+// Close closes the Docker client
+func (s *DockerSecretsService) Close() error {
+ if s.cli != nil {
+ return s.cli.Close()
+ }
+ return nil
+}
+
+// IsDockerAvailable checks if Docker API is accessible
+func IsDockerAvailable() bool {
+ cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
+ if err != nil {
+ return false
+ }
+ defer cli.Close()
+
+ ctx := context.Background()
+ _, err = cli.Ping(ctx)
+ return err == nil
+}
diff --git a/aggregator-server/internal/services/install_template_service.go b/aggregator-server/internal/services/install_template_service.go
index 403bdd3..af52a27 100644
--- a/aggregator-server/internal/services/install_template_service.go
+++ b/aggregator-server/internal/services/install_template_service.go
@@ -33,6 +33,10 @@ func (s *InstallTemplateService) RenderInstallScript(agent *models.Agent, binary
Platform string
Architecture string
Version string
+ AgentUser string
+ AgentHome string
+ ConfigDir string
+ LogDir string
}{
AgentID: agent.ID.String(),
BinaryURL: binaryURL,
@@ -40,6 +44,10 @@ func (s *InstallTemplateService) RenderInstallScript(agent *models.Agent, binary
Platform: agent.OSType,
Architecture: agent.OSArchitecture,
Version: agent.CurrentVersion,
+ AgentUser: "redflag-agent",
+ AgentHome: "/var/lib/redflag-agent",
+ ConfigDir: "/etc/redflag",
+ LogDir: "/var/log/redflag",
}
// Choose template based on platform
@@ -90,6 +98,10 @@ func (s *InstallTemplateService) RenderInstallScriptFromBuild(
Version string
ServerURL string
RegistrationToken string
+ AgentUser string
+ AgentHome string
+ ConfigDir string
+ LogDir string
}{
AgentID: agentID,
BinaryURL: binaryURL,
@@ -99,6 +111,10 @@ func (s *InstallTemplateService) RenderInstallScriptFromBuild(
Version: version,
ServerURL: serverURL,
RegistrationToken: registrationToken,
+ AgentUser: "redflag-agent",
+ AgentHome: "/var/lib/redflag-agent",
+ ConfigDir: "/etc/redflag",
+ LogDir: "/var/log/redflag",
}
templateName := "templates/install/scripts/linux.sh.tmpl"
@@ -144,6 +160,10 @@ func (s *InstallTemplateService) BuildAgentConfigWithAgentID(
Architecture string
Version string
ServerURL string
+ AgentUser string
+ AgentHome string
+ ConfigDir string
+ LogDir string
}{
AgentID: agentID,
BinaryURL: binaryURL,
@@ -152,6 +172,10 @@ func (s *InstallTemplateService) BuildAgentConfigWithAgentID(
Architecture: architecture,
Version: version,
ServerURL: serverURL,
+ AgentUser: "redflag-agent",
+ AgentHome: "/var/lib/redflag-agent",
+ ConfigDir: "/etc/redflag",
+ LogDir: "/var/log/redflag",
}
templateName := "templates/install/scripts/linux.sh.tmpl"
diff --git a/aggregator-server/internal/services/security_settings_service.go b/aggregator-server/internal/services/security_settings_service.go
new file mode 100644
index 0000000..b0f1496
--- /dev/null
+++ b/aggregator-server/internal/services/security_settings_service.go
@@ -0,0 +1,469 @@
+package services
+
+import (
+ "crypto/aes"
+ "crypto/cipher"
+ "crypto/rand"
+ "encoding/base64"
+ "encoding/json"
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+
+ "github.com/Fimeg/RedFlag/aggregator-server/internal/database/queries"
+ "github.com/google/uuid"
+)
+
+type SecuritySettingsService struct {
+ settingsQueries *queries.SecuritySettingsQueries
+ signingService *SigningService
+ encryptionKey []byte
+}
+
+// NewSecuritySettingsService creates a new security settings service
+func NewSecuritySettingsService(settingsQueries *queries.SecuritySettingsQueries, signingService *SigningService) (*SecuritySettingsService, error) {
+ // Get encryption key from environment or generate one
+ keyStr := os.Getenv("REDFLAG_SETTINGS_ENCRYPTION_KEY")
+ var key []byte
+ var err error
+
+ if keyStr != "" {
+ key, err = base64.StdEncoding.DecodeString(keyStr)
+ if err != nil {
+ return nil, fmt.Errorf("invalid encryption key format: %w", err)
+ }
+ } else {
+ // Generate an ephemeral key. In production this key must be persisted (e.g.
+ // via REDFLAG_SETTINGS_ENCRYPTION_KEY); otherwise encrypted settings become
+ // undecryptable after a restart.
+ key = make([]byte, 32) // AES-256
+ if _, err := rand.Read(key); err != nil {
+ return nil, fmt.Errorf("failed to generate encryption key: %w", err)
+ }
+ }
+
+ return &SecuritySettingsService{
+ settingsQueries: settingsQueries,
+ signingService: signingService,
+ encryptionKey: key,
+ }, nil
+}
+
+// GetSetting retrieves a security setting with proper priority resolution
+func (s *SecuritySettingsService) GetSetting(category, key string) (interface{}, error) {
+ // Priority 1: Environment variables
+ if envValue := s.getEnvironmentValue(category, key); envValue != nil {
+ return envValue, nil
+ }
+
+ // Priority 2: Config file values (this would be implemented based on your config structure)
+ if configValue := s.getConfigValue(category, key); configValue != nil {
+ return configValue, nil
+ }
+
+ // Priority 3: Database settings
+ if dbSetting, err := s.settingsQueries.GetSetting(category, key); err == nil && dbSetting != nil {
+ var value interface{}
+ if dbSetting.IsEncrypted {
+ decrypted, err := s.decrypt(dbSetting.Value)
+ if err != nil {
+ return nil, fmt.Errorf("failed to decrypt setting: %w", err)
+ }
+ if err := json.Unmarshal([]byte(decrypted), &value); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal decrypted setting: %w", err)
+ }
+ } else {
+ if err := json.Unmarshal([]byte(dbSetting.Value), &value); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal setting: %w", err)
+ }
+ }
+ return value, nil
+ }
+
+ // Priority 4: Hardcoded defaults
+ if defaultValue := s.getDefaultValue(category, key); defaultValue != nil {
+ return defaultValue, nil
+ }
+
+ return nil, fmt.Errorf("setting not found: %s.%s", category, key)
+}
+
+// SetSetting updates a security setting with validation and audit logging
+func (s *SecuritySettingsService) SetSetting(category, key string, value interface{}, userID uuid.UUID, reason string) error {
+ // Validate the setting
+ if err := s.ValidateSetting(category, key, value); err != nil {
+ return fmt.Errorf("validation failed: %w", err)
+ }
+
+ // Check if setting is sensitive and should be encrypted
+ isEncrypted := s.isSensitiveSetting(category, key)
+
+ // Check if setting exists
+ existing, err := s.settingsQueries.GetSetting(category, key)
+ if err != nil {
+ return fmt.Errorf("failed to check existing setting: %w", err)
+ }
+
+ var oldValue *string
+ var settingID uuid.UUID
+
+ if existing != nil {
+ // Update existing setting
+ updated, oldVal, err := s.settingsQueries.UpdateSetting(category, key, value, &userID)
+ if err != nil {
+ return fmt.Errorf("failed to update setting: %w", err)
+ }
+ oldValue = oldVal
+ settingID = updated.ID
+ } else {
+ // Create new setting
+ created, err := s.settingsQueries.CreateSetting(category, key, value, isEncrypted, &userID)
+ if err != nil {
+ return fmt.Errorf("failed to create setting: %w", err)
+ }
+ settingID = created.ID
+ }
+
+ // Create audit log
+ valueJSON, _ := json.Marshal(value)
+ if err := s.settingsQueries.CreateAuditLog(
+ settingID,
+ userID,
+ "update",
+ stringOrNil(oldValue),
+ string(valueJSON),
+ reason,
+ ); err != nil {
+ // Log error but don't fail the operation
+ fmt.Printf("Warning: failed to create audit log: %v\n", err)
+ }
+
+ return nil
+}
+
+// GetAllSettings retrieves all security settings organized by category
+func (s *SecuritySettingsService) GetAllSettings() (map[string]map[string]interface{}, error) {
+ // Get all default values first
+ result := s.getDefaultSettings()
+
+ // Override with database settings
+ dbSettings, err := s.settingsQueries.GetAllSettings()
+ if err != nil {
+ return nil, fmt.Errorf("failed to get database settings: %w", err)
+ }
+
+ for _, setting := range dbSettings {
+ var value interface{}
+ if setting.IsEncrypted {
+ decrypted, err := s.decrypt(setting.Value)
+ if err != nil {
+ return nil, fmt.Errorf("failed to decrypt setting %s.%s: %w", setting.Category, setting.Key, err)
+ }
+ if err := json.Unmarshal([]byte(decrypted), &value); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal decrypted setting %s.%s: %w", setting.Category, setting.Key, err)
+ }
+ } else {
+ if err := json.Unmarshal([]byte(setting.Value), &value); err != nil {
+ return nil, fmt.Errorf("failed to unmarshal setting %s.%s: %w", setting.Category, setting.Key, err)
+ }
+ }
+
+ if result[setting.Category] == nil {
+ result[setting.Category] = make(map[string]interface{})
+ }
+ result[setting.Category][setting.Key] = value
+ }
+
+ // Override with config file settings
+ for category, settings := range result {
+ for key := range settings {
+ if configValue := s.getConfigValue(category, key); configValue != nil {
+ result[category][key] = configValue
+ }
+ }
+ }
+
+ // Override with environment variables
+ for category, settings := range result {
+ for key := range settings {
+ if envValue := s.getEnvironmentValue(category, key); envValue != nil {
+ result[category][key] = envValue
+ }
+ }
+ }
+
+ return result, nil
+}
+
+// GetSettingsByCategory retrieves all settings for a specific category
+func (s *SecuritySettingsService) GetSettingsByCategory(category string) (map[string]interface{}, error) {
+ allSettings, err := s.GetAllSettings()
+ if err != nil {
+ return nil, err
+ }
+
+ if categorySettings, exists := allSettings[category]; exists {
+ return categorySettings, nil
+ }
+
+ return nil, fmt.Errorf("category not found: %s", category)
+}
+
+// ValidateSetting validates a security setting value
+func (s *SecuritySettingsService) ValidateSetting(category, key string, value interface{}) error {
+ switch fmt.Sprintf("%s.%s", category, key) {
+ case "nonce_validation.timeout_seconds":
+ if timeout, ok := value.(float64); ok {
+ if timeout < 60 || timeout > 3600 {
+ return fmt.Errorf("nonce timeout must be between 60 and 3600 seconds")
+ }
+ } else {
+ return fmt.Errorf("nonce timeout must be a number")
+ }
+
+ case "command_signing.enforcement_mode", "update_signing.enforcement_mode", "machine_binding.enforcement_mode":
+ if mode, ok := value.(string); ok {
+ validModes := []string{"strict", "warning", "disabled"}
+ valid := false
+ for _, m := range validModes {
+ if mode == m {
+ valid = true
+ break
+ }
+ }
+ if !valid {
+ return fmt.Errorf("enforcement mode must be one of: strict, warning, disabled")
+ }
+ } else {
+ return fmt.Errorf("enforcement mode must be a string")
+ }
+
+ case "signature_verification.log_retention_days":
+ if days, ok := value.(float64); ok {
+ if days < 1 || days > 365 {
+ return fmt.Errorf("log retention must be between 1 and 365 days")
+ }
+ } else {
+ return fmt.Errorf("log retention must be a number")
+ }
+
+ case "command_signing.algorithm", "update_signing.algorithm":
+ if algo, ok := value.(string); ok {
+ if algo != "ed25519" {
+ return fmt.Errorf("only ed25519 algorithm is currently supported")
+ }
+ } else {
+ return fmt.Errorf("algorithm must be a string")
+ }
+ }
+
+ return nil
+}
+
+// InitializeDefaultSettings creates default settings in the database if they don't exist
+func (s *SecuritySettingsService) InitializeDefaultSettings() error {
+ defaults := s.getDefaultSettings()
+
+ for category, settings := range defaults {
+ for key, value := range settings {
+ existing, err := s.settingsQueries.GetSetting(category, key)
+ if err != nil {
+ return fmt.Errorf("failed to check existing setting %s.%s: %w", category, key, err)
+ }
+
+ if existing == nil {
+ isEncrypted := s.isSensitiveSetting(category, key)
+ _, err := s.settingsQueries.CreateSetting(category, key, value, isEncrypted, nil)
+ if err != nil {
+ return fmt.Errorf("failed to create default setting %s.%s: %w", category, key, err)
+ }
+ }
+ }
+ }
+
+ return nil
+}
+
+// Helper methods
+
+func (s *SecuritySettingsService) getDefaultSettings() map[string]map[string]interface{} {
+ return map[string]map[string]interface{}{
+ "command_signing": {
+ "enabled": true,
+ "enforcement_mode": "strict",
+ "algorithm": "ed25519",
+ },
+ "update_signing": {
+ "enabled": true,
+ "enforcement_mode": "strict",
+ "allow_unsigned": false,
+ },
+ "nonce_validation": {
+ "timeout_seconds": 600,
+ "reject_expired": true,
+ "log_expired_attempts": true,
+ },
+ "machine_binding": {
+ "enabled": true,
+ "enforcement_mode": "strict",
+ "strict_action": "reject",
+ },
+ "signature_verification": {
+ "log_level": "warn",
+ "log_retention_days": 30,
+ "log_failures": true,
+ "alert_on_failure": true,
+ },
+ }
+}
+
+func (s *SecuritySettingsService) getDefaultValue(category, key string) interface{} {
+ defaults := s.getDefaultSettings()
+ if cat, exists := defaults[category]; exists {
+ if value, exists := cat[key]; exists {
+ return value
+ }
+ }
+ return nil
+}
+
+func (s *SecuritySettingsService) getEnvironmentValue(category, key string) interface{} {
+ envKey := fmt.Sprintf("REDFLAG_%s_%s", strings.ToUpper(category), strings.ToUpper(key))
+ envValue := os.Getenv(envKey)
+ if envValue == "" {
+ return nil
+ }
+
+ // Try to parse as boolean
+ if strings.ToLower(envValue) == "true" {
+ return true
+ }
+ if strings.ToLower(envValue) == "false" {
+ return false
+ }
+
+ // Try to parse as number
+ if num, err := strconv.ParseFloat(envValue, 64); err == nil {
+ return num
+ }
+
+ // Return as string
+ return envValue
+}
+
+func (s *SecuritySettingsService) getConfigValue(category, key string) interface{} {
+ // This would be implemented based on your config structure
+ // For now, returning nil to prioritize env vars and database
+ return nil
+}
+
+func (s *SecuritySettingsService) isSensitiveSetting(category, key string) bool {
+ // Define which settings are sensitive and should be encrypted
+ sensitive := map[string]bool{
+ "command_signing.private_key": true,
+ "update_signing.private_key": true,
+ "machine_binding.server_key": true,
+ "encryption.master_key": true,
+ }
+
+ settingKey := fmt.Sprintf("%s.%s", category, key)
+ return sensitive[settingKey]
+}
+
+func (s *SecuritySettingsService) encrypt(value string) (string, error) {
+ block, err := aes.NewCipher(s.encryptionKey)
+ if err != nil {
+ return "", err
+ }
+
+ gcm, err := cipher.NewGCM(block)
+ if err != nil {
+ return "", err
+ }
+
+ nonce := make([]byte, gcm.NonceSize())
+ if _, err := rand.Read(nonce); err != nil {
+ return "", err
+ }
+
+ ciphertext := gcm.Seal(nonce, nonce, []byte(value), nil)
+ return base64.StdEncoding.EncodeToString(ciphertext), nil
+}
+
+func (s *SecuritySettingsService) decrypt(encryptedValue string) (string, error) {
+ data, err := base64.StdEncoding.DecodeString(encryptedValue)
+ if err != nil {
+ return "", err
+ }
+
+ block, err := aes.NewCipher(s.encryptionKey)
+ if err != nil {
+ return "", err
+ }
+
+ gcm, err := cipher.NewGCM(block)
+ if err != nil {
+ return "", err
+ }
+
+ nonceSize := gcm.NonceSize()
+ if len(data) < nonceSize {
+ return "", fmt.Errorf("ciphertext too short")
+ }
+
+ nonce, ciphertext := data[:nonceSize], data[nonceSize:]
+ plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
+ if err != nil {
+ return "", err
+ }
+
+ return string(plaintext), nil
+}
+
+// stringOrNil dereferences s, returning the empty string when s is nil.
+func stringOrNil(s *string) string {
+ if s == nil {
+ return ""
+ }
+ return *s
+}
+
+// GetNonceTimeout returns the current nonce validation timeout in seconds
+func (s *SecuritySettingsService) GetNonceTimeout() (int, error) {
+ value, err := s.GetSetting("nonce_validation", "timeout_seconds")
+ if err != nil {
+ return 600, err // Return default on error
+ }
+
+ if timeout, ok := value.(float64); ok {
+ return int(timeout), nil
+ }
+
+ return 600, nil // Return default if type is wrong
+}
+
+// GetEnforcementMode returns the enforcement mode for a given category
+func (s *SecuritySettingsService) GetEnforcementMode(category string) (string, error) {
+ value, err := s.GetSetting(category, "enforcement_mode")
+ if err != nil {
+ return "strict", err // Return default on error
+ }
+
+ if mode, ok := value.(string); ok {
+ return mode, nil
+ }
+
+ return "strict", nil // Return default if type is wrong
+}
+
+// IsSignatureVerificationEnabled returns whether signature verification is enabled for a category
+func (s *SecuritySettingsService) IsSignatureVerificationEnabled(category string) (bool, error) {
+ value, err := s.GetSetting(category, "enabled")
+ if err != nil {
+ return true, err // Return default on error
+ }
+
+ if enabled, ok := value.(bool); ok {
+ return enabled, nil
+ }
+
+ return true, nil // Return default if type is wrong
+}
\ No newline at end of file
diff --git a/aggregator-server/internal/services/signing.go b/aggregator-server/internal/services/signing.go
index b101962..343afe4 100644
--- a/aggregator-server/internal/services/signing.go
+++ b/aggregator-server/internal/services/signing.go
@@ -4,6 +4,7 @@ import (
"crypto/ed25519"
"crypto/sha256"
"encoding/hex"
+ "encoding/json"
"fmt"
"io"
"os"
@@ -18,10 +19,18 @@ import (
type SigningService struct {
privateKey ed25519.PrivateKey
publicKey ed25519.PublicKey
+ enabled bool
}
// NewSigningService creates a new signing service with the provided private key
func NewSigningService(privateKeyHex string) (*SigningService, error) {
+ // Check if private key is provided
+ if privateKeyHex == "" {
+ return &SigningService{
+ enabled: false,
+ }, nil
+ }
+
// Decode private key from hex
privateKeyBytes, err := hex.DecodeString(privateKeyHex)
if err != nil {
@@ -39,11 +48,21 @@ func NewSigningService(privateKeyHex string) (*SigningService, error) {
return &SigningService{
privateKey: privateKey,
publicKey: publicKey,
+ enabled: true,
}, nil
}
+// IsEnabled returns true if the signing service is enabled
+func (s *SigningService) IsEnabled() bool {
+ return s.enabled
+}
+
// SignFile signs a file and returns the signature and checksum
func (s *SigningService) SignFile(filePath string) (*models.AgentUpdatePackage, error) {
+ // Check if signing is enabled
+ if !s.enabled {
+ return nil, fmt.Errorf("signing service is disabled")
+ }
// Read the file
file, err := os.Open(filePath)
if err != nil {
@@ -106,11 +125,17 @@ func (s *SigningService) VerifySignature(content []byte, signatureHex string) (b
// GetPublicKey returns the public key in hex format
func (s *SigningService) GetPublicKey() string {
+ if !s.enabled {
+ return ""
+ }
return hex.EncodeToString(s.publicKey)
}
// GetPublicKeyFingerprint returns a short fingerprint of the public key
func (s *SigningService) GetPublicKeyFingerprint() string {
+ if !s.enabled {
+ return ""
+ }
// Use first 8 bytes as fingerprint
return hex.EncodeToString(s.publicKey[:8])
}
@@ -223,6 +248,29 @@ func (s *SigningService) VerifyNonce(nonceUUID uuid.UUID, timestamp time.Time, s
return valid, nil
}
+// SignCommand creates an Ed25519 signature for a command
+func (s *SigningService) SignCommand(cmd *models.AgentCommand) (string, error) {
+ if s.privateKey == nil {
+ return "", fmt.Errorf("signing service not initialized with private key")
+ }
+
+ // Serialize command data for signing
+ // Format: {id}:{command_type}:{params_hash}
+ // Note: Only sign what we send to the agent (ID, Type, Params)
+ paramsJSON, _ := json.Marshal(cmd.Params)
+ paramsHash := sha256.Sum256(paramsJSON)
+ paramsHashHex := hex.EncodeToString(paramsHash[:])
+
+ message := fmt.Sprintf("%s:%s:%s",
+ cmd.ID.String(),
+ cmd.CommandType,
+ paramsHashHex)
+
+ // Sign with Ed25519
+ signature := ed25519.Sign(s.privateKey, []byte(message))
+ return hex.EncodeToString(signature), nil
+}
+
// TODO: Key rotation implementation
// This is a stub for future key rotation functionality
// Key rotation should:
diff --git a/aggregator-server/internal/services/templates/install/scripts/linux.sh.tmpl b/aggregator-server/internal/services/templates/install/scripts/linux.sh.tmpl
index e597f2f..043bf7c 100644
--- a/aggregator-server/internal/services/templates/install/scripts/linux.sh.tmpl
+++ b/aggregator-server/internal/services/templates/install/scripts/linux.sh.tmpl
@@ -7,6 +7,33 @@
set -e
+# Check if running as root (required for user creation and sudoers)
+if [ "$EUID" -ne 0 ]; then
+ echo "ERROR: This script must be run as root for secure installation (use sudo)"
+ exit 1
+fi
+
+AGENT_USER="redflag-agent"
+AGENT_HOME="/var/lib/redflag-agent"
+SUDOERS_FILE="/etc/sudoers.d/redflag-agent"
+
+# Function to detect package manager
+detect_package_manager() {
+ if command -v apt-get &> /dev/null; then
+ echo "apt"
+ elif command -v dnf &> /dev/null; then
+ echo "dnf"
+ elif command -v yum &> /dev/null; then
+ echo "yum"
+ elif command -v pacman &> /dev/null; then
+ echo "pacman"
+ elif command -v zypper &> /dev/null; then
+ echo "zypper"
+ else
+ echo "unknown"
+ fi
+}
+
AGENT_ID="{{.AgentID}}"
BINARY_URL="{{.BinaryURL}}"
CONFIG_URL="{{.ConfigURL}}"
@@ -17,6 +44,9 @@ SERVICE_NAME="redflag-agent"
VERSION="{{.Version}}"
LOG_DIR="/var/log/redflag"
BACKUP_DIR="${CONFIG_DIR}/backups/backup.$(date +%s)"
+AGENT_USER="redflag-agent"
+AGENT_HOME="/var/lib/redflag-agent"
+SUDOERS_FILE="/etc/sudoers.d/redflag-agent"
echo "=== RedFlag Agent v${VERSION} Installation ==="
echo "Agent ID: ${AGENT_ID}"
@@ -44,23 +74,98 @@ if [ "${MIGRATION_NEEDED}" = true ]; then
echo "=== Migration Required ==="
echo "Agent will migrate on first start. Backing up configuration..."
sudo mkdir -p "${BACKUP_DIR}"
-
+
if [ -f "${OLD_CONFIG_DIR}/config.json" ]; then
echo "Backing up old configuration..."
sudo cp -r "${OLD_CONFIG_DIR}"/* "${BACKUP_DIR}/" 2>/dev/null || true
fi
-
+
if [ -f "${CONFIG_DIR}/config.json" ]; then
echo "Backing up current configuration..."
sudo cp "${CONFIG_DIR}/config.json" "${BACKUP_DIR}/config.json.backup" 2>/dev/null || true
fi
-
+
echo "Migration will run automatically when agent starts."
echo "View migration logs with: sudo journalctl -u ${SERVICE_NAME} -f"
echo
fi
-# Step 3: Stop existing service
+# Step 3: Create system user and home directory
+echo "Creating system user for agent..."
+if id "$AGENT_USER" &>/dev/null; then
+ echo "✓ User $AGENT_USER already exists"
+else
+ sudo useradd -r -s /bin/false -d "$AGENT_HOME" "$AGENT_USER"
+ echo "✓ User $AGENT_USER created"
+fi
+
+# Create home directory
+if [ ! -d "$AGENT_HOME" ]; then
+ sudo mkdir -p "$AGENT_HOME"
+ sudo chown "$AGENT_USER:$AGENT_USER" "$AGENT_HOME"
+ sudo chmod 750 "$AGENT_HOME"
+ echo "✓ Home directory created at $AGENT_HOME"
+fi
+
+# Step 4: Install sudoers configuration with OS-specific commands
+PM=$(detect_package_manager)
+echo "Detected package manager: $PM"
+echo "Installing sudoers configuration..."
+
+case "$PM" in
+ apt)
+ cat <<'EOF' | sudo tee "$SUDOERS_FILE" > /dev/null
+# RedFlag Agent minimal sudo permissions - APT
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get update
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get install -y *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get upgrade -y
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get install --dry-run --yes *
+EOF
+ ;;
+ dnf|yum)
+ cat <<'EOF' | sudo tee "$SUDOERS_FILE" > /dev/null
+# RedFlag Agent minimal sudo permissions - DNF/YUM
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/dnf makecache
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/dnf install -y *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/dnf upgrade -y
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/yum makecache
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/yum install -y *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/yum update -y
+EOF
+ ;;
+ pacman)
+ cat <<'EOF' | sudo tee "$SUDOERS_FILE" > /dev/null
+# RedFlag Agent minimal sudo permissions - Pacman
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/pacman -Sy
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/pacman -S --noconfirm *
+EOF
+ ;;
+ *)
+ cat <<'EOF' | sudo tee "$SUDOERS_FILE" > /dev/null
+# RedFlag Agent minimal sudo permissions - Generic (APT and DNF)
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get update
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/apt-get install -y *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/dnf makecache
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/dnf install -y *
+EOF
+ ;;
+esac
+
+# Add Docker commands
+cat <<'DOCKER_EOF' | sudo tee -a "$SUDOERS_FILE" > /dev/null
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/docker pull *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/docker image inspect *
+{{.AgentUser}} ALL=(root) NOPASSWD: /usr/bin/docker manifest inspect *
+DOCKER_EOF
+
+sudo chmod 440 "$SUDOERS_FILE"
+if visudo -c -f "$SUDOERS_FILE" &>/dev/null; then
+ echo "✓ Sudoers configuration installed and validated"
+else
+ echo "✗ Sudoers configuration validation failed - removing invalid file to avoid breaking sudo"
+ sudo rm -f "$SUDOERS_FILE"
+fi
+
+# Step 5: Stop existing service
if systemctl is-active --quiet ${SERVICE_NAME} 2>/dev/null; then
echo "Stopping existing RedFlag agent service..."
sudo systemctl stop ${SERVICE_NAME}
@@ -70,7 +175,7 @@ fi
echo "Creating directories..."
sudo mkdir -p "${CONFIG_DIR}"
sudo mkdir -p "${CONFIG_DIR}/backups"
-sudo mkdir -p "/var/lib/redflag"
+sudo mkdir -p "$AGENT_HOME"
sudo mkdir -p "/var/log/redflag"
# Step 5: Download agent binary
@@ -88,7 +193,7 @@ if [ -f "${CONFIG_DIR}/config.json" ]; then
else
echo "[CONFIG] Fresh install - generating minimal configuration with registration token"
# Create minimal config template - agent will populate missing fields on first start
- sudo cat > "${CONFIG_DIR}/config.json" < /dev/null
+ MinAgentVersion: "0.1.22",
+ BuildTime: time.Now(),
+ }
+}
+
+// ExtractConfigVersionFromAgent extracts config version from agent version
+// Agent version format: v0.1.23.6 where fourth octet maps to config version
+func ExtractConfigVersionFromAgent(agentVersion string) string {
+ // Strip 'v' prefix if present
+ cleanVersion := agentVersion
+ if len(cleanVersion) > 0 && cleanVersion[0] == 'v' {
+ cleanVersion = cleanVersion[1:]
+ }
+
+ // For now, use the last character of the version string as the config version
+ // e.g. v0.1.23 -> "3" (last digit)
+ if len(cleanVersion) >= 1 {
+ return cleanVersion[len(cleanVersion)-1:]
+ }
+
+ // Default fallback
+ return "3"
+}
+
+// ValidateAgentVersion checks if an agent version is compatible
+func ValidateAgentVersion(agentVersion string) error {
+ current := GetCurrentVersions()
+
+	// Check minimum version.
+	// NOTE: this is a plain lexicographic string comparison; it misorders
+	// octets with different digit counts (e.g. "0.1.9" vs "0.1.22").
+	if agentVersion < current.MinAgentVersion {
+ return fmt.Errorf("agent version %s is below minimum %s", agentVersion, current.MinAgentVersion)
+ }
+
+ return nil
+}
+
+// GetBuildFlags returns the ldflags to inject versions into agent builds
+func GetBuildFlags() []string {
+ versions := GetCurrentVersions()
+ return []string{
+ fmt.Sprintf("-X github.com/Fimeg/RedFlag/aggregator-agent/internal/version.Version=%s", versions.AgentVersion),
+ fmt.Sprintf("-X github.com/Fimeg/RedFlag/aggregator-agent/internal/version.ConfigVersion=%s", versions.ConfigVersion),
+ fmt.Sprintf("-X github.com/Fimeg/RedFlag/aggregator-agent/internal/version.BuildTime=%s", versions.BuildTime.Format(time.RFC3339)),
+ }
+}
\ No newline at end of file
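
`ValidateAgentVersion` above compares version strings lexicographically, which misorders octets of different digit counts. A minimal standalone sketch of a numeric octet-by-octet comparison (the `compareVersions` name is illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareVersions compares two dotted version strings octet by octet,
// numerically, returning -1, 0, or 1. Unlike a plain string comparison,
// "0.1.9" correctly sorts below "0.1.22". Missing octets count as 0.
func compareVersions(a, b string) int {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	n := len(pa)
	if len(pb) > n {
		n = len(pb)
	}
	for i := 0; i < n; i++ {
		va, vb := 0, 0
		if i < len(pa) {
			va, _ = strconv.Atoi(pa[i])
		}
		if i < len(pb) {
			vb, _ = strconv.Atoi(pb[i])
		}
		switch {
		case va < vb:
			return -1
		case va > vb:
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("0.1.9", "0.1.22"))   // -1: 9 < 22 numerically
	fmt.Println(compareVersions("v0.1.23", "0.1.23")) // 0: 'v' prefix stripped
}
```

The same split also makes config-version extraction robust for multi-digit fourth octets (v0.1.23.12 yields "12", not "2").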
diff --git a/aggregator-server/test-scheduler b/aggregator-server/test-scheduler
deleted file mode 100644
index ce92c5f..0000000
Binary files a/aggregator-server/test-scheduler and /dev/null differ
diff --git a/aggregator-server/test-server b/aggregator-server/test-server
deleted file mode 100755
index 3998dbb..0000000
Binary files a/aggregator-server/test-server and /dev/null differ
diff --git a/aggregator-web/src/components/AgentScanners.tsx b/aggregator-web/src/components/AgentScanners.tsx
index ac8b535..df129f5 100644
--- a/aggregator-web/src/components/AgentScanners.tsx
+++ b/aggregator-web/src/components/AgentScanners.tsx
@@ -1,4 +1,4 @@
-import React from 'react';
+import React, { useState } from 'react';
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query';
import {
RefreshCw,
@@ -20,7 +20,7 @@ import { agentApi, securityApi } from '@/lib/api';
import toast from 'react-hot-toast';
import { cn } from '@/lib/utils';
import { AgentSubsystem } from '@/types';
-import { AgentUpdate } from './AgentUpdate';
+import { AgentUpdatesModal } from './AgentUpdatesModal';
interface AgentScannersProps {
agentId: string;
@@ -241,23 +241,21 @@ export function AgentScanners({ agentId }: AgentScannersProps) {
const autoRunCount = subsystems.filter(s => s.auto_run && s.enabled).length;
return (
-
- {/* Subsystem Configuration Table */}
-
-
-
-
Subsystem Configuration
-
- {enabledCount} enabled
- {autoRunCount} auto-running
- β’ {subsystems.length} total
-
+
+ {/* Subsystems Section - Continuous Surface */}
+
+
+
+
Subsystems
+
+ {enabledCount} enabled • {autoRunCount} auto-running • {subsystems.length} total
+
setShowUpdateModal(true)}
- className="flex items-center space-x-1 px-3 py-1 text-xs text-blue-600 hover:text-blue-800 border border-blue-300 hover:bg-blue-50 rounded-md transition-colors"
+ className="text-sm text-primary-600 hover:text-primary-800 flex items-center space-x-1 border border-primary-300 px-2 py-1 rounded"
>
-
+
Update Agent
@@ -404,17 +402,12 @@ export function AgentScanners({ agentId }: AgentScannersProps) {
)}
- {/* Note */}
-
- Subsystems report specific metrics on scheduled intervals. Enable auto-run to schedule automatic scans, or use Actions to trigger manual scans.
-
-
- {/* Security Health */}
-
-
+ {/* Security Health Section - Continuous Surface */}
+
+
-
Security Health
+ Security Health
queryClient.invalidateQueries({ queryKey: ['security-overview'] })}
@@ -427,243 +420,205 @@ export function AgentScanners({ agentId }: AgentScannersProps) {
{securityLoading ? (
-
+
Loading security status...
) : securityOverview ? (
-
- {/* Overall Security Status */}
-
-
-
-
-
-
Overall Security Status
-
- {securityOverview.overall_status === 'healthy' ? 'All systems nominal' :
- securityOverview.overall_status === 'degraded' ? `${securityOverview.alerts.length} active issue(s)` :
- 'Critical issues detected'}
-
-
-
+
+ {/* Overall Status - Compact */}
+
+
- {securityOverview.overall_status === 'healthy' &&
}
- {securityOverview.overall_status === 'degraded' &&
}
- {securityOverview.overall_status === 'unhealthy' &&
}
- {securityOverview.overall_status.toUpperCase()}
+ 'w-3 h-3 rounded-full',
+ securityOverview.overall_status === 'healthy' ? 'bg-green-500' :
+ securityOverview.overall_status === 'degraded' ? 'bg-amber-500' : 'bg-red-500'
+ )}>
+
+
Overall Status
+
+ {securityOverview.overall_status === 'healthy' ? 'All systems nominal' :
+ securityOverview.overall_status === 'degraded' ? `${securityOverview.alerts.length} issue(s)` :
+ 'Critical issues'}
+
+
+ {securityOverview.overall_status === 'healthy' &&
}
+ {securityOverview.overall_status === 'degraded' &&
}
+ {securityOverview.overall_status === 'unhealthy' &&
}
+ {securityOverview.overall_status.toUpperCase()}
+
- {/* Enhanced Security Metrics */}
-
-
- {Object.entries(securityOverview.subsystems).map(([key, subsystem]) => {
- const display = getSecurityStatusDisplay(subsystem.status);
- const getEnhancedTooltip = (subsystemType: string, status: string) => {
- switch (subsystemType) {
- case 'command_validation':
- const cmdSubsystem = securityOverview.subsystems.command_validation || {};
- const cmdMetrics = cmdSubsystem.metrics || {};
- return `Commands processed: ${cmdMetrics.commands_last_hour || 0}. Failures: 0 (last 24h). Pending: ${cmdMetrics.total_pending_commands || 0}.`;
- case 'ed25519_signing':
- const signingSubsystem = securityOverview.subsystems.ed25519_signing || {};
- const signingChecks = signingSubsystem.checks || {};
- return `Fingerprint: ${signingChecks.public_key_fingerprint || 'Not available'}. Algorithm: ${signingChecks.algorithm || 'Ed25519'}. Valid since: ${new Date(securityOverview.timestamp).toLocaleDateString()}.`;
- case 'machine_binding':
- const bindingSubsystem = securityOverview.subsystems.machine_binding || {};
- const bindingChecks = bindingSubsystem.checks || {};
- return `Bound agents: ${bindingChecks.bound_agents || 'Unknown'}. Violations (24h): ${bindingChecks.recent_violations || 0}. Enforcement: Hardware fingerprint. Min version: ${bindingChecks.min_agent_version || 'v0.1.22'}.`;
- case 'nonce_validation':
- const nonceSubsystem = securityOverview.subsystems.nonce_validation || {};
- const nonceChecks = nonceSubsystem.checks || {};
- return `Max age: ${nonceChecks.max_age_minutes || 5}min. Replays blocked (24h): ${nonceChecks.validation_failures || 0}. Format: ${nonceChecks.nonce_format || 'UUID:Timestamp'}.`;
- default:
- return `Status: ${status}. Enabled: ${subsystem.enabled}`;
- }
- };
+ {/* Security Grid - 2x2 Layout */}
+
+ {Object.entries(securityOverview.subsystems).map(([key, subsystem]) => {
+ const statusColors = {
+ healthy: 'bg-green-100 text-green-700 border-green-200',
+ enforced: 'bg-blue-100 text-blue-700 border-blue-200',
+ degraded: 'bg-amber-100 text-amber-700 border-amber-200',
+ unhealthy: 'bg-red-100 text-red-700 border-red-200'
+ };
- const getEnhancedSubtitle = (subsystemType: string, status: string) => {
- switch (subsystemType) {
- case 'command_validation':
- const cmdSubsystem = securityOverview.subsystems.command_validation || {};
- const cmdMetrics = cmdSubsystem.metrics || {};
- const pendingCount = cmdMetrics.total_pending_commands || 0;
- return pendingCount > 0 ? `Operational - ${pendingCount} pending` : 'Operational - 0 failures';
- case 'ed25519_signing':
- const signingSubsystem = securityOverview.subsystems.ed25519_signing || {};
- const signingChecks = signingSubsystem.checks || {};
- return signingChecks.signing_operational ? 'Enabled - Key valid' : 'Disabled - Invalid key';
- case 'machine_binding':
- const bindingSubsystem = securityOverview.subsystems.machine_binding || {};
- const bindingChecks = bindingSubsystem.checks || {};
- const violations = bindingChecks.recent_violations || 0;
- return status === 'healthy' || status === 'enforced' ? `Enforced - ${violations} violations` : 'Violations detected';
- case 'nonce_validation':
- const nonceSubsystem = securityOverview.subsystems.nonce_validation || {};
- const nonceChecks = nonceSubsystem.checks || {};
- const maxAge = nonceChecks.max_age_minutes || 5;
- const failures = nonceChecks.validation_failures || 0;
- return `Enabled - ${maxAge}min window, ${failures} blocked`;
- default:
- return `${subsystem.enabled ? 'Enabled' : 'Disabled'} - ${status}`;
- }
- };
-
- const getDetailedSecurityInfo = (subsystemType: string, subsystem: any) => {
- if (!securityOverview?.subsystems[subsystemType]) return '';
-
- const subsystemData = securityOverview.subsystems[subsystemType];
- const checks = subsystemData.checks || {};
- const metrics = subsystemData.metrics || {};
-
- switch (subsystemType) {
- case 'nonce_validation':
- return `Nonces: ${metrics.total_pending_commands || 0} pending. Max age: ${checks.max_age_minutes || 5}min. Failures: ${checks.validation_failures || 0}. Format: ${checks.nonce_format || 'UUID:Timestamp'}`;
- case 'machine_binding':
- return `Machine ID: ${checks.machine_id_type || 'Hardware fingerprint'}. Bound agents: ${checks.bound_agents || 'Unknown'}. Violations: ${checks.recent_violations || 0}. Min version: ${checks.min_agent_version || 'v0.1.22'}`;
- case 'ed25519_signing':
- return `Key: ${checks.public_key_fingerprint?.substring(0, 16) || 'Not available'}... Algorithm: ${checks.algorithm || 'Ed25519'}. Valid since: ${new Date(securityOverview.timestamp).toLocaleDateString()}`;
- default:
- return `Status: ${subsystem.status}. Last check: ${new Date(securityOverview.timestamp).toLocaleString()}`;
- }
- };
-
- return (
-
-
-
-
+ return (
+
+
+
+
+
{getSecurityIcon(key)}
+
+
+ {getSecurityDisplayName(key)}
+
+
+ {key === 'command_validation' ?
+ `${subsystem.metrics?.total_pending_commands || 0} pending` :
+ key === 'ed25519_signing' ?
+ 'Key valid' :
+ key === 'machine_binding' ?
+ `${subsystem.checks?.recent_violations || 0} violations` :
+ key === 'nonce_validation' ?
+ `${subsystem.checks?.validation_failures || 0} blocked` :
+ subsystem.status}
+
+
-
-
- {getSecurityDisplayName(key)}
-
-
-
- {getEnhancedSubtitle(key, subsystem.status)}
-
- {(key === 'nonce_validation' || key === 'machine_binding' || key === 'ed25519_signing') && (
-
-
- {getDetailedSecurityInfo(key, subsystem)}
-
-
- )}
+
+ {subsystem.status === 'healthy' &&
}
+ {subsystem.status === 'enforced' &&
}
+ {subsystem.status === 'degraded' &&
}
+ {subsystem.status === 'unhealthy' &&
}
-
- {subsystem.status === 'healthy' &&
}
- {subsystem.status === 'enforced' &&
}
- {subsystem.status === 'degraded' &&
}
- {subsystem.status === 'unhealthy' &&
}
- {subsystem.status.toUpperCase()}
-
- );
- })}
-
+
+ );
+ })}
- {/* Security Alerts - Frosted Glass Style */}
+ {/* Detailed Info Panel */}
+
+ {Object.entries(securityOverview.subsystems).map(([key, subsystem]) => {
+ const checks = subsystem.checks || {};
+
+ return (
+
+
+
+ {key === 'nonce_validation' ?
+ `Nonces: ${subsystem.metrics?.total_pending_commands || 0} | Max: ${checks.max_age_minutes || 5}m | Failures: ${checks.validation_failures || 0}` :
+ key === 'machine_binding' ?
+ `Bound: ${checks.bound_agents || 'N/A'} | Violations: ${checks.recent_violations || 0} | Method: Hardware` :
+ key === 'ed25519_signing' ?
+ `Key: ${checks.public_key_fingerprint?.substring(0, 16) || 'N/A'}... | Algo: ${checks.algorithm || 'Ed25519'}` :
+ key === 'command_validation' ?
+ `Processed: ${subsystem.metrics?.commands_last_hour || 0}/hr | Pending: ${subsystem.metrics?.total_pending_commands || 0}` :
+ `Status: ${subsystem.status}`}
+
+
+
+ );
+ })}
+
+
+ {/* Security Alerts & Recommendations */}
{(securityOverview.alerts.length > 0 || securityOverview.recommendations.length > 0) && (
-
+
{securityOverview.alerts.length > 0 && (
-
-
Security Alerts
-
- {securityOverview.alerts.map((alert, index) => (
-
-
- {alert}
-
+
+
+
+
Alerts ({securityOverview.alerts.length})
+
+
+ {securityOverview.alerts.slice(0, 1).map((alert, index) => (
+ • {alert}
))}
+ {securityOverview.alerts.length > 1 && (
+ +{securityOverview.alerts.length - 1} more
+ )}
)}
{securityOverview.recommendations.length > 0 && (
-
-
Recommendations
-
- {securityOverview.recommendations.map((recommendation, index) => (
-
-
- {recommendation}
-
+
+
+
+
Recs ({securityOverview.recommendations.length})
+
+
+ {securityOverview.recommendations.slice(0, 1).map((rec, index) => (
+ • {rec}
))}
+ {securityOverview.recommendations.length > 1 && (
+ +{securityOverview.recommendations.length - 1} more
+ )}
)}
)}
- {/* Last Updated */}
-
-
- Last updated: {new Date(securityOverview.timestamp).toLocaleString()}
+ {/* Stats Row */}
+
+
+
{Object.keys(securityOverview.subsystems).length}
+
Systems
+
+
+
+ {Object.values(securityOverview.subsystems).filter(s => s.status === 'healthy' || s.status === 'enforced').length}
+
+
Healthy
+
+
+
{securityOverview.alerts.length}
+
Alerts
+
+
+
+ {new Date(securityOverview.timestamp).toLocaleTimeString()}
+
+
Updated
) : (
-
-
-
Unable to load security status
-
Security monitoring may be unavailable
+
+
+
Unable to load security status
)}
- {/* Update Agent Modal */}
- {showUpdateModal && agent && (
-
-
setShowUpdateModal(false)} />
-
-
-
-
Update Agent: {agent.hostname}
- setShowUpdateModal(false)}
- className="text-gray-400 hover:text-gray-600 transition-colors"
- >
-
-
-
-
-
{
- setShowUpdateModal(false);
- queryClient.invalidateQueries({ queryKey: ['agent', agentId] });
- }}
- />
-
-
-
-
- )}
+ {/* Agent Updates Modal */}
+
{
+ setShowUpdateModal(false);
+ }}
+ selectedAgentIds={[agentId]} // Single agent for this scanner view
+ onAgentsUpdated={() => {
+ // Refresh agent and subsystems data after update
+ queryClient.invalidateQueries({ queryKey: ['agent', agentId] });
+ queryClient.invalidateQueries({ queryKey: ['subsystems', agentId] });
+ }}
+ />
);
}
diff --git a/aggregator-web/src/components/AgentUpdate.tsx b/aggregator-web/src/components/AgentUpdate.tsx
index b8b262c..ff4182e 100644
--- a/aggregator-web/src/components/AgentUpdate.tsx
+++ b/aggregator-web/src/components/AgentUpdate.tsx
@@ -172,12 +172,36 @@ export function AgentUpdate({ agent, onUpdateComplete, className }: AgentUpdateP
Update Agent: {agent.hostname}
-
- Update agent from {currentVersion} to {availableVersion} ?
-
-
- This will temporarily take the agent offline during the update process.
-
+
+ {/* Warning for same-version updates */}
+ {currentVersion === availableVersion ? (
+ <>
+
+
+ ⚠️ Version appears identical
+
+
+ Current: {currentVersion} → Target: {availableVersion}
+
+
+ This will reinstall the current version. Useful if the binary was rebuilt or corrupted.
+
+
+
+ The agent will be temporarily offline during reinstallation.
+
+ >
+ ) : (
+ <>
+
+ Update agent from {currentVersion} to {availableVersion}?
+
+
+ This will temporarily take the agent offline during the update process.
+
+ >
+ )}
+
setShowConfirmDialog(false)}
@@ -187,9 +211,14 @@ export function AgentUpdate({ agent, onUpdateComplete, className }: AgentUpdateP
- Update Agent
+ {currentVersion === availableVersion ? 'Reinstall Agent' : 'Update Agent'}
diff --git a/aggregator-web/src/components/AgentUpdatesEnhanced.tsx b/aggregator-web/src/components/AgentUpdatesEnhanced.tsx
index 0526a9d..8e75bd4 100644
--- a/aggregator-web/src/components/AgentUpdatesEnhanced.tsx
+++ b/aggregator-web/src/components/AgentUpdatesEnhanced.tsx
@@ -2,7 +2,6 @@ import { useState } from 'react';
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
import {
Search,
- Upload,
RefreshCw,
Terminal,
ChevronDown,
@@ -15,7 +14,6 @@ import { updateApi, agentApi } from '@/lib/api';
import toast from 'react-hot-toast';
import { cn } from '@/lib/utils';
import type { UpdatePackage } from '@/types';
-import { AgentUpdatesModal } from './AgentUpdatesModal';
interface AgentUpdatesEnhancedProps {
agentId: string;
@@ -50,7 +48,6 @@ export function AgentUpdatesEnhanced({ agentId }: AgentUpdatesEnhancedProps) {
const [selectedSeverity, setSelectedSeverity] = useState('all');
const [showLogsModal, setShowLogsModal] = useState(false);
const [logsData, setLogsData] = useState(null);
- const [showUpdateModal, setShowUpdateModal] = useState(false);
  const [expandedUpdates, setExpandedUpdates] = useState<Set<string>>(new Set());
const [selectedUpdates, setSelectedUpdates] = useState([]);
@@ -319,14 +316,8 @@ export function AgentUpdatesEnhanced({ agentId }: AgentUpdatesEnhancedProps) {
)}
- {/* Update Agent Button */}
- setShowUpdateModal(true)}
- className="text-sm text-primary-600 hover:text-primary-800 flex items-center space-x-1 border border-primary-300 px-2 py-1 rounded"
- >
-
- Update Agent
-
+ {/* Header-only view for Update packages - no agent update button here */}
+ {/* Users should use Agent Health page for agent updates */}
{/* Search and Filters */}
@@ -571,17 +562,6 @@ export function AgentUpdatesEnhanced({ agentId }: AgentUpdatesEnhancedProps) {
)}
-
- {/* Agent Update Modal */}
-
setShowUpdateModal(false)}
- selectedAgentIds={[agentId]}
- onAgentsUpdated={() => {
- setShowUpdateModal(false);
- queryClient.invalidateQueries({ queryKey: ['agents'] });
- }}
- />
);
}
diff --git a/aggregator-web/src/components/security/SecurityCategorySection.tsx b/aggregator-web/src/components/security/SecurityCategorySection.tsx
new file mode 100644
index 0000000..c1e409e
--- /dev/null
+++ b/aggregator-web/src/components/security/SecurityCategorySection.tsx
@@ -0,0 +1,152 @@
+import React, { useState } from 'react';
+import { AlertTriangle, Info, Lock, Shield, CheckCircle } from 'lucide-react';
+import { SecurityCategorySectionProps, SecuritySetting as SecuritySettingModel } from '@/types/security';
+import SecuritySetting from './SecuritySetting';
+
+const SecurityCategorySection: React.FC<SecurityCategorySectionProps> = ({
+  title,
+  description,
+  settings,
+  onSettingChange,
+  disabled = false,
+  loading = false,
+  error = null,
+}) => {
+  const [expandedInfo, setExpandedInfo] = useState<string | null>(null);
+
+  // Group settings by type for better organization
+  const groupedSettings = settings.reduce((acc, setting) => {
+    const group = setting.type === 'toggle' ? 'main' : 'advanced';
+    if (!acc[group]) acc[group] = [];
+    acc[group].push(setting);
+    return acc;
+  }, {} as Record<string, SecuritySettingModel[]>);
+
+  const isSectionEnabled = settings.find(s => s.key === 'enabled')?.value ?? true;
+
+ return (
+
+ {/* Header */}
+
+
+
+
{title}
+ {isSectionEnabled ? (
+
+ ) : (
+
+ )}
+
+
{description}
+
+ {error && (
+
+ )}
+
+
+ {/* Loading State */}
+ {loading && (
+
+ )}
+
+ {/* Settings Grid */}
+ {!loading && (
+
+ {/* Main Settings (Toggles) */}
+ {groupedSettings.main && groupedSettings.main.length > 0 && (
+
+ {groupedSettings.main.map((setting) => (
+
+
onSettingChange(setting.key, value)}
+ disabled={disabled || setting.disabled}
+ error={null}
+ />
+ {setting.description && (
+
+
{setting.description}
+ {setting.key === 'enabled' && !setting.value && (
+
+
+
+
+ Disabling this feature may reduce system security
+
+
+
+ )}
+
+ )}
+
+ ))}
+
+ )}
+
+ {/* Advanced Settings */}
+ {groupedSettings.advanced && groupedSettings.advanced.length > 0 && (
+
+
+
+
Advanced Configuration
+
+
+
+ {groupedSettings.advanced.map((setting) => (
+
+
onSettingChange(setting.key, value)}
+ disabled={disabled || setting.disabled || !isSectionEnabled}
+ error={null}
+ />
+ {setting.description && (
+
+
+
{setting.description}
+
+ )}
+
+ ))}
+
+
+ )}
+
+ )}
+
+ {/* Section Footer Info */}
+ {!loading && !error && (
+
+
+
+
+
+ {isSectionEnabled ? 'Feature is active' : 'Feature is disabled'}
+
+
+
+ {settings.length} settings
+ {settings.filter(s => s.disabled).length > 0 && (
+
+ {settings.filter(s => s.disabled).length} disabled
+
+ )}
+
+
+
+ )}
+
+ );
+};
+
+export default SecurityCategorySection;
\ No newline at end of file
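
`SecurityCategorySection` buckets its settings with a `reduce()`: toggles land in a "main" group, everything else under "advanced". The same grouping, sketched as standalone Go (`Setting` here is a cut-down stand-in for the component's `SecuritySetting` type, not a server type):

```go
package main

import "fmt"

// Setting is a minimal stand-in for the UI's SecuritySetting shape.
type Setting struct {
	Key  string
	Type string // "toggle", "number", "select", ...
}

// groupByType buckets settings the way the component's reduce() does:
// toggles drive the section's on/off state ("main"); everything else is
// tucked under "advanced" configuration.
func groupByType(settings []Setting) map[string][]Setting {
	groups := make(map[string][]Setting)
	for _, s := range settings {
		key := "advanced"
		if s.Type == "toggle" {
			key = "main"
		}
		groups[key] = append(groups[key], s)
	}
	return groups
}

func main() {
	g := groupByType([]Setting{
		{Key: "enabled", Type: "toggle"},
		{Key: "max_age_minutes", Type: "number"},
	})
	fmt.Println(len(g["main"]), len(g["advanced"])) // 1 1
}
```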
diff --git a/aggregator-web/src/components/security/SecurityEvents.tsx b/aggregator-web/src/components/security/SecurityEvents.tsx
new file mode 100644
index 0000000..bf8ec51
--- /dev/null
+++ b/aggregator-web/src/components/security/SecurityEvents.tsx
@@ -0,0 +1,590 @@
+import React, { useState, useEffect } from 'react';
+import {
+ Activity,
+ AlertTriangle,
+ CheckCircle,
+ XCircle,
+ Download,
+ Filter,
+ Search,
+ RefreshCw,
+ Pause,
+ Play,
+ ChevronDown,
+ Eye,
+ Copy,
+ Calendar,
+ Server,
+ User,
+ Tag,
+ FileText,
+ Info
+} from 'lucide-react';
+import { useSecurityEvents, useSecurityWebSocket } from '@/hooks/useSecuritySettings';
+import { SecurityEvent, EventFilters, SecurityEventsProps } from '@/types/security';
+
+const SecurityEvents: React.FC<SecurityEventsProps> = () => {
+  const [filters, setFilters] = useState<EventFilters>({});
+  const [selectedEvent, setSelectedEvent] = useState<SecurityEvent | null>(null);
+  const [showFilterPanel, setShowFilterPanel] = useState(false);
+  const [searchTerm, setSearchTerm] = useState('');
+  const [currentPage, setCurrentPage] = useState(1);
+  const pageSize = 20;
+
+ // Fetch events
+ const { data: eventsData, loading, error, refetch } = useSecurityEvents(
+ currentPage,
+ pageSize,
+ filters
+ );
+
+ // WebSocket for real-time updates
+ const { events: liveEvents, connected, clearEvents } = useSecurityWebSocket();
+ const [liveUpdates, setLiveUpdates] = useState(true);
+
+ // Combine live events with paginated events
+ const allEvents = React.useMemo(() => {
+ const staticEvents = eventsData?.events || [];
+ if (liveUpdates && liveEvents.length > 0) {
+ // Merge live events, avoiding duplicates
+ const existingIds = new Set(staticEvents.map(e => e.id));
+ const newLiveEvents = liveEvents.filter(e => !existingIds.has(e.id));
+ return [...newLiveEvents, ...staticEvents].slice(0, pageSize);
+ }
+ return staticEvents;
+ }, [eventsData, liveEvents, liveUpdates, pageSize]);
+
+ // Severity color mapping
+ const getSeverityColor = (severity: string) => {
+ switch (severity) {
+ case 'critical':
+ return 'text-red-600 bg-red-50 border-red-200';
+ case 'error':
+ return 'text-red-600 bg-red-50 border-red-200';
+ case 'warn':
+ return 'text-yellow-600 bg-yellow-50 border-yellow-200';
+ case 'info':
+ return 'text-blue-600 bg-blue-50 border-blue-200';
+ default:
+ return 'text-gray-600 bg-gray-50 border-gray-200';
+ }
+ };
+
+  const getSeverityIcon = (severity: string) => {
+    switch (severity) {
+      case 'critical':
+      case 'error':
+        return <XCircle />;
+      case 'warn':
+        return <AlertTriangle />;
+      case 'info':
+        return <Info />;
+      default:
+        return <Activity />;
+    }
+  };
+
+ // Format timestamp
+ const formatTimestamp = (timestamp: string) => {
+ const date = new Date(timestamp);
+ return date.toLocaleString();
+ };
+
+ // Copy event details to clipboard
+ const copyEventDetails = (event: SecurityEvent) => {
+ const details = JSON.stringify(event, null, 2);
+ navigator.clipboard.writeText(details);
+ };
+
+ // Export events
+ const exportEvents = async (format: 'json' | 'csv') => {
+ // Implementation would call API to export events
+ console.log(`Exporting events as ${format}`);
+ };
+
+ // Clear filters
+ const clearFilters = () => {
+ setFilters({});
+ setSearchTerm('');
+ setCurrentPage(1);
+ };
+
+ // Apply filters
+ const applyFilters = (newFilters: EventFilters) => {
+ setFilters(newFilters);
+ setCurrentPage(1);
+ setShowFilterPanel(false);
+ };
+
+ return (
+
+ {/* Header */}
+
+
+
+
Security Events
+
+ {connected ? (
+
+ ) : (
+
+ )}
+
+
+
+
+
setLiveUpdates(!liveUpdates)}
+ className={`flex items-center gap-2 px-3 py-2 text-sm rounded-lg border ${
+ liveUpdates
+ ? 'bg-green-50 text-green-700 border-green-200'
+ : 'bg-gray-50 text-gray-700 border-gray-200'
+ }`}
+ >
+ {liveUpdates ? : }
+ {liveUpdates ? 'Pause Updates' : 'Resume Updates'}
+
+
+
setShowFilterPanel(!showFilterPanel)}
+ className={`flex items-center gap-2 px-3 py-2 text-sm rounded-lg border ${
+ Object.keys(filters).length > 0
+ ? 'bg-blue-50 text-blue-700 border-blue-200'
+ : 'bg-gray-50 text-gray-700 border-gray-200'
+ }`}
+ >
+
+ Filters
+ {Object.keys(filters).length > 0 && (
+
+ {Object.keys(filters).length}
+
+ )}
+
+
+
+
+
+ Export
+
+
+
+ exportEvents('json')}
+ className="block w-full text-left px-3 py-2 text-sm hover:bg-gray-50 rounded-t-lg"
+ >
+ Export as JSON
+
+ exportEvents('csv')}
+ className="block w-full text-left px-3 py-2 text-sm hover:bg-gray-50 rounded-b-lg"
+ >
+ Export as CSV
+
+
+
+
+
refetch()}
+ disabled={loading}
+ className="flex items-center gap-2 px-3 py-2 text-sm rounded-lg border border-gray-200 bg-gray-50 text-gray-700 hover:bg-gray-100 disabled:opacity-50"
+ >
+
+ Refresh
+
+
+
+
+ {/* Search Bar */}
+
+
+ {
+ setSearchTerm(e.target.value);
+ if (e.target.value) {
+ applyFilters({ ...filters, search: e.target.value });
+ } else {
+ const newFilters = { ...filters };
+ delete newFilters.search;
+ applyFilters(newFilters);
+ }
+ }}
+ className="w-full pl-10 pr-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500"
+ />
+
+
+
+ {/* Filter Panel */}
+ {showFilterPanel && (
+
+
Filter Events
+
+
+ {/* Severity Filter */}
+
+
+ Severity
+
+
+ {['critical', 'error', 'warn', 'info'].map((severity) => (
+
+ {
+ const current = filters.severity || [];
+ if (e.target.checked) {
+ applyFilters({
+ ...filters,
+ severity: [...current, severity],
+ });
+ } else {
+ applyFilters({
+ ...filters,
+ severity: current.filter(s => s !== severity),
+ });
+ }
+ }}
+ className="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
+ />
+ {severity}
+
+ ))}
+
+
+
+ {/* Category Filter */}
+
+
+ Category
+
+
+ {[
+ 'command_signing',
+ 'update_security',
+ 'machine_binding',
+ 'key_management',
+ 'authentication',
+ ].map((category) => (
+
+ {
+ const current = filters.category || [];
+ if (e.target.checked) {
+ applyFilters({
+ ...filters,
+ category: [...current, category],
+ });
+ } else {
+ applyFilters({
+ ...filters,
+ category: current.filter(c => c !== category),
+ });
+ }
+ }}
+ className="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
+ />
+
+ {category.replace('_', ' ').replace(/\b\w/g, l => l.toUpperCase())}
+
+
+ ))}
+
+
+
+ {/* Date Range Filter */}
+
+
+ {/* Agent/User Filter */}
+
+
+ Agent / User
+
+ {
+ applyFilters({
+ ...filters,
+ agent_id: e.target.value || undefined,
+ user_id: e.target.value || undefined,
+ });
+ }}
+ className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
+ />
+
+
+
+
+
+ Clear Filters
+
+ setShowFilterPanel(false)}
+ className="px-4 py-2 text-sm text-white bg-blue-600 rounded-lg hover:bg-blue-700"
+ >
+ Apply Filters
+
+
+
+ )}
+
+ {/* Events List */}
+
+ {loading && allEvents.length === 0 ? (
+
+ ) : error ? (
+
+
+
Failed to load security events
+
refetch()}
+ className="mt-2 text-blue-600 hover:text-blue-800"
+ >
+ Try again
+
+
+ ) : allEvents.length === 0 ? (
+
+
+
No security events found
+ {Object.keys(filters).length > 0 && (
+
+ Clear filters
+
+ )}
+
+ ) : (
+
+ {allEvents.map((event) => (
+
setSelectedEvent(event)}
+ >
+
+
+ {getSeverityIcon(event.severity)}
+
+
+
+
+
+
+ {event.event_type.replace('_', ' ').replace(/\b\w/g, l => l.toUpperCase())}
+
+
{event.message}
+
+
+ {formatTimestamp(event.timestamp)}
+
+
+
+
+
+
+ {event.category.replace('_', ' ')}
+
+ {event.agent_id && (
+
+
+ {event.agent_id}
+
+ )}
+ {event.user_id && (
+
+
+ {event.user_id}
+
+ )}
+ {event.trace_id && (
+
+
+ {event.trace_id.substring(0, 8)}...
+
+ )}
+
+
+
+
+ ))}
+
+ )}
+
+ {/* Pagination */}
+ {eventsData && eventsData.total > pageSize && (
+
+
+ Showing {(currentPage - 1) * pageSize + 1} to{' '}
+ {Math.min(currentPage * pageSize, eventsData.total)} of {eventsData.total} events
+
+
+ setCurrentPage(Math.max(1, currentPage - 1))}
+ disabled={currentPage === 1}
+ className="px-3 py-1 text-sm border rounded-lg disabled:opacity-50"
+ >
+ Previous
+
+ setCurrentPage(currentPage + 1)}
+ disabled={currentPage * pageSize >= eventsData.total}
+ className="px-3 py-1 text-sm border rounded-lg disabled:opacity-50"
+ >
+ Next
+
+
+
+ )}
+
+
+ {/* Event Detail Modal */}
+ {selectedEvent && (
+
setSelectedEvent(null)}
+ >
+
e.stopPropagation()}
+ >
+
+
+
Event Details
+ setSelectedEvent(null)}
+ className="text-gray-400 hover:text-gray-600"
+ >
+
+
+
+
+
+ {/* Event Header */}
+
+
+ {getSeverityIcon(selectedEvent.severity)}
+
+
+
+ {selectedEvent.event_type.replace('_', ' ').replace(/\b\w/g, l => l.toUpperCase())}
+
+
{selectedEvent.message}
+
+ {formatTimestamp(selectedEvent.timestamp)}
+
+
+
+
+ {/* Event Information */}
+
+
+
Severity
+
{selectedEvent.severity}
+
+
+
Category
+
{selectedEvent.category.replace('_', ' ')}
+
+ {selectedEvent.agent_id && (
+
+
Agent ID
+
{selectedEvent.agent_id}
+
+ )}
+ {selectedEvent.user_id && (
+
+
User ID
+
{selectedEvent.user_id}
+
+ )}
+ {selectedEvent.trace_id && (
+
+
Trace ID
+
{selectedEvent.trace_id}
+
+ )}
+
+
+ {/* Event Details */}
+ {Object.keys(selectedEvent.details).length > 0 && (
+
+
+
Additional Details
+
copyEventDetails(selectedEvent)}
+ className="flex items-center gap-1 text-xs text-blue-600 hover:text-blue-800"
+ >
+
+ Copy
+
+
+
+ {JSON.stringify(selectedEvent.details, null, 2)}
+
+
+ )}
+
+
+
+
+ )}
+
+ );
+};
+
+export default SecurityEvents;
\ No newline at end of file
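
`SecurityEvents` merges live WebSocket events into the current REST page, dropping duplicates by id and truncating to the page size. A standalone Go sketch of that merge (`Event` is a cut-down stand-in for the UI's `SecurityEvent` type):

```go
package main

import "fmt"

// Event is a minimal stand-in for the UI's SecurityEvent shape.
type Event struct {
	ID      string
	Message string
}

// mergeLiveEvents prepends live events that are not already in the current
// page, then truncates to pageSize, mirroring the de-duplication the
// events list performs while live updates are enabled.
func mergeLiveEvents(live, page []Event, pageSize int) []Event {
	seen := make(map[string]bool, len(page))
	for _, e := range page {
		seen[e.ID] = true
	}
	merged := make([]Event, 0, len(live)+len(page))
	for _, e := range live {
		if !seen[e.ID] {
			merged = append(merged, e)
		}
	}
	merged = append(merged, page...)
	if len(merged) > pageSize {
		merged = merged[:pageSize]
	}
	return merged
}

func main() {
	live := []Event{{ID: "a"}, {ID: "b"}}
	page := []Event{{ID: "b"}, {ID: "c"}}
	fmt.Println(len(mergeLiveEvents(live, page, 2))) // 2
}
```

Newest-first ordering is preserved because unseen live events are placed ahead of the paginated results before truncation.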
diff --git a/aggregator-web/src/components/security/SecuritySetting.tsx b/aggregator-web/src/components/security/SecuritySetting.tsx
new file mode 100644
index 0000000..f8f355c
--- /dev/null
+++ b/aggregator-web/src/components/security/SecuritySetting.tsx
@@ -0,0 +1,334 @@
+import React, { useState, useEffect } from 'react';
+import { Check, X, Eye, EyeOff, AlertTriangle } from 'lucide-react';
+import { SecuritySettingProps } from '@/types/security';
+
+const SecuritySetting: React.FC<SecuritySettingProps> = ({
+  setting,
+  onChange,
+  disabled = false,
+  error = null,
+}) => {
+  const [localValue, setLocalValue] = useState(setting.value);
+  const [showValue, setShowValue] = useState(!setting.sensitive);
+  const [isValid, setIsValid] = useState(true);
+
+ // Validate input on change
+ useEffect(() => {
+ if (setting.validation && typeof setting.validation === 'function') {
+ const validationError = setting.validation(localValue);
+ setIsValid(!validationError);
+ } else {
+ // Built-in validations
+ if (setting.type === 'number' || setting.type === 'slider') {
+ const num = Number(localValue);
+ if (setting.min !== undefined && num < setting.min) setIsValid(false);
+ else if (setting.max !== undefined && num > setting.max) setIsValid(false);
+ else setIsValid(true);
+ }
+ }
+ }, [localValue, setting]);
+
+ // Handle value change
+ const handleChange = (value: any) => {
+ setLocalValue(value);
+
+ // For immediate updates (toggles), call onChange right away
+ if (setting.type === 'toggle' || setting.type === 'checkbox') {
+ onChange(value);
+ }
+ };
+
+ // Handle blur for text-like inputs
+ const handleBlur = () => {
+ if (setting.type === 'toggle' || setting.type === 'checkbox') return;
+
+ if (isValid && localValue !== setting.value) {
+ onChange(localValue);
+ } else if (!isValid) {
+ // Revert to original value on invalid
+ setLocalValue(setting.value);
+ }
+ };
+
+ // Render toggle switch
+ const renderToggle = () => {
+ const isEnabled = Boolean(localValue);
+
+ return (
+ handleChange(!isEnabled)}
+ disabled={disabled}
+ className={`
+ relative inline-flex h-6 w-11 flex-shrink-0 cursor-pointer rounded-full border-2 border-transparent
+ transition-colors duration-200 ease-in-out focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2
+ ${disabled ? 'opacity-50 cursor-not-allowed' : ''}
+ ${isEnabled ? 'bg-blue-600' : 'bg-gray-200'}
+ `}
+ >
+
+
+ );
+ };
+
+ // Render select dropdown
+ const renderSelect = () => (
+ handleChange(e.target.value)}
+ disabled={disabled}
+ onBlur={handleBlur}
+ className={`
+ w-full px-3 py-2 border rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500
+ ${disabled ? 'bg-gray-100 cursor-not-allowed' : 'bg-white'}
+ ${error ? 'border-red-300' : 'border-gray-300'}
+ `}
+ >
+ {setting.options?.map((option) => (
+
+ {option.charAt(0).toUpperCase() + option.slice(1).replace(/_/g, ' ')}
+
+ ))}
+
+ );
+
+ // Render number input
+  const renderNumber = () => (
+    <input
+      type="number"
+      value={localValue}
+      onChange={(e) => handleChange(Number(e.target.value))}
+      disabled={disabled}
+      onBlur={handleBlur}
+      min={setting.min}
+      max={setting.max}
+      step={setting.step}
+      className={`
+        w-full px-3 py-2 border rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500
+        ${disabled ? 'bg-gray-100 cursor-not-allowed' : 'bg-white'}
+        ${error ? 'border-red-300' : isValid ? 'border-gray-300' : 'border-red-300'}
+      `}
+    />
+  );
+
+ // Render text input
+  const renderText = () => (
+    <div className="relative">
+      <input
+        type={setting.sensitive && !showValue ? 'password' : 'text'}
+        value={localValue}
+        onChange={(e) => handleChange(e.target.value)}
+        disabled={disabled}
+        onBlur={handleBlur}
+        className={`
+          w-full px-3 py-2 pr-10 border rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500
+          ${disabled ? 'bg-gray-100 cursor-not-allowed' : 'bg-white'}
+          ${error ? 'border-red-300' : isValid ? 'border-gray-300' : 'border-red-300'}
+        `}
+      />
+      {setting.sensitive && (
+        <button
+          type="button"
+          onClick={() => setShowValue(!showValue)}
+          className="absolute right-2 top-1/2 transform -translate-y-1/2 text-gray-400 hover:text-gray-600"
+        >
+          {showValue ? <EyeOff className="h-4 w-4" /> : <Eye className="h-4 w-4" />}
+        </button>
+      )}
+    </div>
+  );
+
+ // Render slider
+  const renderSlider = () => {
+    const min = setting.min || 0;
+    const max = setting.max || 100;
+    const percentage = ((Number(localValue) - min) / (max - min)) * 100;
+
+    return (
+      <div>
+        <div className="flex justify-between text-xs text-gray-500 mb-1">
+          <span>{min}</span>
+          <span>{localValue}</span>
+          <span>{max}</span>
+        </div>
+        <input
+          type="range"
+          min={min}
+          max={max}
+          step={setting.step}
+          value={localValue}
+          onChange={(e) => handleChange(Number(e.target.value))}
+          onMouseUp={handleBlur}
+          disabled={disabled}
+          className={`
+            w-full h-2 bg-gray-200 rounded-lg appearance-none cursor-pointer
+            ${disabled ? 'opacity-50 cursor-not-allowed' : ''}
+            [&::-webkit-slider-thumb]:appearance-none
+            [&::-webkit-slider-thumb]:w-4
+            [&::-webkit-slider-thumb]:h-4
+            [&::-webkit-slider-thumb]:rounded-full
+            [&::-webkit-slider-thumb]:bg-blue-600
+            [&::-webkit-slider-thumb]:cursor-pointer
+            [&::-moz-range-thumb]:w-4
+            [&::-moz-range-thumb]:h-4
+            [&::-moz-range-thumb]:rounded-full
+            [&::-moz-range-thumb]:bg-blue-600
+            [&::-moz-range-thumb]:cursor-pointer
+            [&::-moz-range-thumb]:border-0
+          `}
+          style={{
+            background: `linear-gradient(to right, #3B82F6 0%, #3B82F6 ${percentage}%, #E5E7EB ${percentage}%, #E5E7EB 100%)`
+          }}
+        />
+        {setting.step && (
+          <p className="text-xs text-gray-500 mt-1">
+            Step: {setting.step} {setting.min && setting.max && `(${setting.min} - ${setting.max})`}
+          </p>
+        )}
+      </div>
+    );
+  };
+
+ // Render checkbox group
+  const renderCheckboxGroup = () => {
+    const options = setting.options as Array<{ label: string; value: string }>;
+
+    return (
+      <div className="space-y-2">
+        {options.map((option) => (
+          <label key={option.value} className="flex items-center">
+            <input
+              type="checkbox"
+              checked={Boolean(localValue?.[option.value])}
+              onChange={(e) => {
+                const newValue = {
+                  ...localValue,
+                  [option.value]: e.target.checked,
+                };
+                handleChange(newValue);
+              }}
+              disabled={disabled}
+              className="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
+            />
+            <span className="ml-2 text-sm text-gray-700">{option.label}</span>
+          </label>
+        ))}
+      </div>
+    );
+  };
+
+ // Render JSON editor
+ const renderJSON = () => {
+ const [tempValue, setTempValue] = useState(JSON.stringify(localValue, null, 2));
+    const [jsonError, setJsonError] = useState<string | null>(null);
+
+ useEffect(() => {
+ setTempValue(JSON.stringify(localValue, null, 2));
+ }, [localValue]);
+
+ const validateJSON = (value: string) => {
+ try {
+ const parsed = JSON.parse(value);
+ setJsonError(null);
+ handleChange(parsed);
+ } catch (e) {
+ setJsonError('Invalid JSON format');
+ }
+ };
+
+    return (
+      <div>
+        <textarea
+          value={tempValue}
+          onChange={(e) => setTempValue(e.target.value)}
+          onBlur={() => validateJSON(tempValue)}
+          disabled={disabled}
+          rows={6}
+          className="w-full px-3 py-2 font-mono text-sm border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
+        />
+        {jsonError && <p className="mt-1 text-xs text-red-600">{jsonError}</p>}
+      </div>
+    );
+ };
+
+ // Render based on setting type
+ const renderControl = () => {
+ switch (setting.type) {
+ case 'toggle':
+ return renderToggle();
+ case 'select':
+ return renderSelect();
+ case 'number':
+ return renderNumber();
+ case 'text':
+ return renderText();
+ case 'slider':
+ return renderSlider();
+ case 'checkbox-group':
+ return renderCheckboxGroup();
+ case 'json':
+ return renderJSON();
+ default:
+        return <span className="text-sm text-gray-500">Unknown setting type</span>;
+ }
+ };
+
+  return (
+    <div className="space-y-2">
+      <label className="block text-sm font-medium text-gray-700">
+        {setting.label}
+        {setting.required && <span className="text-red-500 ml-1">*</span>}
+      </label>
+
+      {renderControl()}
+
+      {/* Validation Status */}
+      {localValue !== setting.value && isValid && (
+        <p className="flex items-center text-xs text-blue-600">
+          <CheckCircle className="h-3 w-3 mr-1" />
+          Changed
+        </p>
+      )}
+
+      {!isValid && (
+        <p className="flex items-center text-xs text-red-600">
+          <AlertTriangle className="h-3 w-3 mr-1" />
+          Invalid value
+        </p>
+      )}
+
+      {/* Error message */}
+      {error && (
+        <p className="text-xs text-red-600">{error}</p>
+      )}
+    </div>
+  );
+};
+
+export default SecuritySetting;
\ No newline at end of file
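The `handleBlur` logic in `SecuritySetting` commits valid edits and reverts invalid ones. That decision rule can be factored into a pure helper for unit testing; a minimal sketch (`resolveBlur` is a hypothetical name, not part of the diff):

```typescript
// Pure form of SecuritySetting's blur rule: commit a valid changed value,
// revert an invalid one, otherwise do nothing.
type BlurAction =
  | { kind: 'commit'; value: unknown } // propagate the edited value via onChange
  | { kind: 'revert'; value: unknown } // restore the last saved value
  | { kind: 'noop' };

function resolveBlur(
  localValue: unknown,
  savedValue: unknown,
  isValid: boolean
): BlurAction {
  if (isValid && localValue !== savedValue) {
    return { kind: 'commit', value: localValue };
  }
  if (!isValid) {
    return { kind: 'revert', value: savedValue };
  }
  return { kind: 'noop' };
}
```

Keeping the rule pure means the component's `handleBlur` reduces to dispatching on the returned action, and edge cases (unchanged value, invalid input) can be tested without rendering.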
diff --git a/aggregator-web/src/components/security/SecurityStatusCard.tsx b/aggregator-web/src/components/security/SecurityStatusCard.tsx
new file mode 100644
index 0000000..53ea6b2
--- /dev/null
+++ b/aggregator-web/src/components/security/SecurityStatusCard.tsx
@@ -0,0 +1,231 @@
+import React from 'react';
+import {
+ Shield,
+ ShieldCheck,
+ AlertTriangle,
+ XCircle,
+ RefreshCw,
+ CheckCircle,
+ Clock,
+ Activity,
+ Eye,
+ Info
+} from 'lucide-react';
+import { SecurityStatusCardProps } from '@/types/security';
+
+const SecurityStatusCard: React.FC<SecurityStatusCardProps> = ({
+ status,
+ onRefresh,
+ loading = false,
+}) => {
+  const getStatusIcon = () => {
+    switch (status.overall) {
+      case 'healthy':
+        return <ShieldCheck className="h-8 w-8 text-green-600" />;
+      case 'warning':
+        return <AlertTriangle className="h-8 w-8 text-yellow-600" />;
+      case 'critical':
+        return <XCircle className="h-8 w-8 text-red-600" />;
+      default:
+        return <Shield className="h-8 w-8 text-gray-600" />;
+    }
+  };
+
+ const getStatusColor = () => {
+ switch (status.overall) {
+ case 'healthy':
+ return 'bg-green-50 border-green-200 text-green-900';
+ case 'warning':
+ return 'bg-yellow-50 border-yellow-200 text-yellow-900';
+ case 'critical':
+ return 'bg-red-50 border-red-200 text-red-900';
+ default:
+ return 'bg-gray-50 border-gray-200 text-gray-900';
+ }
+ };
+
+ const getStatusText = () => {
+ switch (status.overall) {
+ case 'healthy':
+ return 'All security features are operating normally';
+ case 'warning':
+ return 'Some security features require attention';
+ case 'critical':
+ return 'Critical security issues detected';
+ default:
+ return 'Security status unknown';
+ }
+ };
+
+ const formatLastUpdate = (timestamp: string) => {
+ const date = new Date(timestamp);
+ const now = new Date();
+ const diff = now.getTime() - date.getTime();
+ const minutes = Math.floor(diff / 60000);
+
+ if (minutes < 1) return 'Just now';
+ if (minutes < 60) return `${minutes} minute${minutes !== 1 ? 's' : ''} ago`;
+ if (minutes < 1440) return `${Math.floor(minutes / 60)} hour${Math.floor(minutes / 60) !== 1 ? 's' : ''} ago`;
+ return date.toLocaleDateString();
+ };
+
+ return (
+
+ {/* Main Status Card */}
+
+
+
+
+ {getStatusIcon()}
+
+
+
Security Overview
+
{getStatusText()}
+
+
+
+
+ {loading ? 'Updating...' : 'Refresh'}
+
+
+
+ {/* Feature Status Grid */}
+
+ {status.features.map((feature) => (
+
+
+
+ {feature.name}
+
+ {feature.enabled ? (
+
+
+
+ {feature.status === 'healthy' ? 'Active' : feature.status}
+
+
+ ) : (
+
+
+ Disabled
+
+ )}
+
+ {feature.details && (
+
{feature.details}
+ )}
+
+
+ {formatLastUpdate(feature.last_check)}
+
+
+ ))}
+
+
+
+ {/* Recent Events Summary */}
+
+
+
+
+
Recent Events
+
{status.recent_events}
+
Last 24 hours
+
+
+
+
+
+
+
+
+
Enabled Features
+
+ {status.features.filter(f => f.enabled).length}/{status.features.length}
+
+
Active
+
+
+
+
+
+
+
+
+
Last Updated
+
+ {formatLastUpdate(status.last_updated)}
+
+
+
+
+
+
+
+ {/* Alert Section */}
+ {status.overall !== 'healthy' && (
+
+
+ {status.overall === 'warning' ? (
+
+ ) : (
+
+ )}
+
+
+ {status.overall === 'warning' ? 'Security Warnings' : 'Security Alert'}
+
+
+ {status.features
+ .filter(f => f.status !== 'healthy' && f.enabled)
+ .map((feature, index) => (
+
+
+ {feature.name}: {feature.details || feature.status}
+
+ ))}
+
+
+
+
+ )}
+
+ {/* Quick Actions */}
+
+
+
+ Quick Actions
+
+
+
+
+ View Security Logs
+
+
+
+ Run Security Check
+
+
+
+ Monitor Events
+
+
+
+
+ );
+};
+
+export default SecurityStatusCard;
\ No newline at end of file
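`formatLastUpdate` in `SecurityStatusCard` buckets an ISO timestamp into a relative label (minutes, hours, then a date). Extracted as a standalone function with the same thresholds and an injectable "now", it becomes straightforward to unit-test:

```typescript
// Standalone version of SecurityStatusCard's formatLastUpdate: identical
// minute/hour thresholds, with the current time injectable for testing.
function formatLastUpdate(timestamp: string, now: Date = new Date()): string {
  const date = new Date(timestamp);
  const minutes = Math.floor((now.getTime() - date.getTime()) / 60000);

  if (minutes < 1) return 'Just now';
  if (minutes < 60) return `${minutes} minute${minutes !== 1 ? 's' : ''} ago`;
  if (minutes < 1440) {
    const hours = Math.floor(minutes / 60);
    return `${hours} hour${hours !== 1 ? 's' : ''} ago`;
  }
  return date.toLocaleDateString();
}
```

The injectable `now` parameter is the only change from the component's inline version; it avoids flaky time-dependent tests.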
diff --git a/aggregator-web/src/hooks/useAgentEvents.ts b/aggregator-web/src/hooks/useAgentEvents.ts
new file mode 100644
index 0000000..470a0d6
--- /dev/null
+++ b/aggregator-web/src/hooks/useAgentEvents.ts
@@ -0,0 +1,87 @@
+import { useEffect } from 'react';
+import { useQuery } from '@tanstack/react-query';
+import api from '@/lib/api';
+import { useRealtimeStore } from '@/lib/store';
+
+// SystemEvent interface matching the backend model
+interface SystemEvent {
+ id: string;
+ agent_id: string;
+ event_type: string;
+ event_subtype: string;
+ severity: 'info' | 'warning' | 'error' | 'critical';
+ component: string;
+ message: string;
+  metadata?: Record<string, any>;
+ created_at: string; // ISO timestamp string
+}
+
+export interface UseAgentEventsOptions {
+ severity?: string; // comma-separated: error,critical,warning,info
+ limit?: number; // default 50, max 1000
+ pollingInterval?: number; // milliseconds, default 30000 (30s)
+}
+
+export const useAgentEvents = (
+ agentId: string | null | undefined,
+ options: UseAgentEventsOptions = {}
+) => {
+ const { addNotification } = useRealtimeStore();
+ const {
+ severity = 'error,critical,warning',
+ limit = 50,
+ pollingInterval = 30000,
+ } = options;
+
+ const { data, isLoading, error, refetch } = useQuery({
+ queryKey: ['agent-events', agentId, severity, limit],
+ queryFn: async () => {
+ if (!agentId) {
+ return { events: [] as SystemEvent[], total: 0 };
+ }
+
+ const params = new URLSearchParams();
+ if (severity) params.append('severity', severity);
+ if (limit) params.append('limit', limit.toString());
+
+ const response = await api.get(
+ `/agents/${agentId}/events?${params.toString()}`
+ );
+ return response.data as { events: SystemEvent[]; total: number };
+ },
+ enabled: !!agentId,
+ refetchInterval: pollingInterval,
+ staleTime: pollingInterval / 2, // Consider data stale after half the polling interval
+ });
+
+ useEffect(() => {
+ if (data?.events && data.events.length > 0) {
+ // Map system events to notification format and add to notification store
+ data.events.forEach((event) => {
+ // Map severity to notification type
+        const type =
+          event.severity === 'critical' || event.severity === 'error'
+            ? 'error'
+            : event.severity === 'warning'
+            ? 'warning'
+            : 'info';
+
+ addNotification({
+ type,
+ title: `${event.component}: ${event.event_type}`,
+ message: event.message,
+ });
+ });
+ }
+ }, [data?.events, addNotification]);
+
+ return {
+ events: data?.events ?? [],
+ total: data?.total ?? 0,
+ isLoading,
+ error,
+ refetch,
+ };
+};
\ No newline at end of file
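The effect in `useAgentEvents` pushes a notification for every event in the response, and it re-runs on each poll, so the same event can notify repeatedly. One way to make it idempotent is to track already-notified IDs; a sketch (the helper name and shape are hypothetical, not part of the diff):

```typescript
// Hypothetical dedup helper for useAgentEvents: given freshly fetched
// events and the set of IDs already notified, return only the unseen
// events plus the updated set.
interface EventLike {
  id: string;
}

function filterUnseen<T extends EventLike>(
  events: T[],
  seen: Set<string>
): { fresh: T[]; seen: Set<string> } {
  const next = new Set(seen); // copy so the caller's set stays untouched
  const fresh = events.filter((e) => {
    if (next.has(e.id)) return false;
    next.add(e.id);
    return true;
  });
  return { fresh, seen: next };
}
```

In the hook, the `seen` set would live in a `useRef` so it survives re-renders without triggering them; only `fresh` events would be passed to `addNotification`.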
diff --git a/aggregator-web/src/hooks/useSecuritySettings.ts b/aggregator-web/src/hooks/useSecuritySettings.ts
new file mode 100644
index 0000000..08a89d6
--- /dev/null
+++ b/aggregator-web/src/hooks/useSecuritySettings.ts
@@ -0,0 +1,490 @@
+import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
+import { api, securityApi } from '@/lib/api';
+import {
+ SecuritySettings,
+ SecuritySettingsResponse,
+ SecurityEventsResponse,
+ SecurityEvent,
+ AuditEntry,
+ SecurityAuditResponse,
+ EventFilters,
+ SecuritySettingsState,
+ KeyRotationRequest,
+ KeyRotationResponse,
+ MachineFingerprint
+} from '@/types/security';
+
+// Default security settings
+const defaultSecuritySettings: SecuritySettings = {
+ command_signing: {
+ enabled: true,
+ enforcement_mode: 'strict',
+ algorithm: 'ed25519',
+ },
+ update_security: {
+ enabled: true,
+ enforcement_mode: 'strict',
+ nonce_timeout_seconds: 300,
+ require_signature_verification: true,
+ allowed_algorithms: ['ed25519', 'rsa-2048', 'ecdsa-p256'],
+ },
+ machine_binding: {
+ enabled: true,
+ enforcement_mode: 'strict',
+ binding_components: {
+ hardware_id: true,
+ bios_uuid: true,
+ mac_addresses: true,
+ cpu_id: false,
+ disk_serial: false,
+ },
+ violation_action: 'block',
+ binding_grace_period_minutes: 5,
+ },
+ logging: {
+ log_level: 'info',
+ retention_days: 30,
+ log_failures: true,
+ log_successes: false,
+ log_to_file: true,
+ log_to_console: true,
+ export_format: 'json',
+ },
+ key_management: {
+ current_key: {
+ key_id: '',
+ algorithm: 'ed25519',
+ created_at: '',
+ fingerprint: '',
+ },
+ auto_rotation: false,
+ rotation_interval_days: 90,
+ grace_period_days: 7,
+ key_history: [],
+ },
+};
+
+// API calls
+const fetchSecuritySettings = async (): Promise<SecuritySettings> => {
+ try {
+ const response = await api.get('/security/settings');
+ return response.data.settings || defaultSecuritySettings;
+ } catch (error) {
+ // Return defaults if API fails
+ console.warn('Failed to fetch security settings, using defaults:', error);
+ return defaultSecuritySettings;
+ }
+};
+
+const updateSecuritySetting = async (category: string, key: string, value: any): Promise<void> => {
+ await api.put(`/security/settings/${category}/${key}`, { value });
+};
+
+const updateSecuritySettings = async (settings: Partial<SecuritySettings>): Promise<SecuritySettingsResponse> => {
+ const response = await api.put('/security/settings', { settings });
+ return response.data;
+};
+
+const fetchSecurityAudit = async (page: number = 1, pageSize: number = 20): Promise<SecurityAuditResponse> => {
+ const response = await api.get('/security/settings/audit', {
+ params: { page, page_size: pageSize }
+ });
+ return response.data;
+};
+
+const fetchSecurityEvents = async (
+ page: number = 1,
+ pageSize: number = 20,
+ filters?: EventFilters
+): Promise<SecurityEventsResponse> => {
+ const params: any = { page, page_size: pageSize };
+
+ if (filters) {
+ if (filters.severity?.length) params.severity = filters.severity.join(',');
+ if (filters.category?.length) params.category = filters.category.join(',');
+ if (filters.date_range) {
+ params.start_date = filters.date_range.start;
+ params.end_date = filters.date_range.end;
+ }
+ if (filters.agent_id) params.agent_id = filters.agent_id;
+ if (filters.user_id) params.user_id = filters.user_id;
+ if (filters.search) params.search = filters.search;
+ }
+
+ const response = await api.get('/security/events', { params });
+ return response.data;
+};
+
+const rotateKey = async (request: KeyRotationRequest): Promise<KeyRotationResponse> => {
+ const response = await api.post('/security/keys/rotate', request);
+ return response.data;
+};
+
+const getMachineFingerprint = async (agentId: string): Promise<MachineFingerprint> => {
+ const response = await api.get(`/security/machine-binding/fingerprint/${agentId}`);
+ return response.data;
+};
+
+const exportSecuritySettings = async (): Promise<Blob> => {
+ const response = await api.get('/security/settings/export', {
+ responseType: 'blob',
+ });
+ return response.data;
+};
+
+const importSecuritySettings = async (file: File): Promise<any> => {
+ const formData = new FormData();
+ formData.append('file', file);
+
+ const response = await api.post('/security/settings/import', formData, {
+ headers: {
+ 'Content-Type': 'multipart/form-data',
+ },
+ });
+ return response.data;
+};
+
+// Main hook for security settings
+export const useSecuritySettings = () => {
+ const queryClient = useQueryClient();
+
+ // Fetch security settings
+ const {
+ data: settings = defaultSecuritySettings,
+ isLoading: loadingSettings,
+ error: settingsError,
+ refetch: refetchSettings,
+ } = useQuery({
+ queryKey: ['security', 'settings'],
+ queryFn: fetchSecuritySettings,
+ staleTime: 5 * 60 * 1000, // 5 minutes
+ });
+
+ // Fetch security overview/status
+ const {
+ data: securityOverview,
+ isLoading: loadingOverview,
+ refetch: refetchOverview,
+ } = useQuery({
+ queryKey: ['security', 'overview'],
+ queryFn: () => securityApi.getOverview(),
+ staleTime: 60 * 1000, // 1 minute
+ refetchInterval: 60 * 1000, // Auto-refresh every minute
+ });
+
+ // Update single setting mutation
+ const updateSettingMutation = useMutation({
+ mutationFn: ({ category, key, value }: { category: string; key: string; value: any }) =>
+ updateSecuritySetting(category, key, value),
+ onSuccess: () => {
+ queryClient.invalidateQueries({ queryKey: ['security', 'settings'] });
+ queryClient.invalidateQueries({ queryKey: ['security', 'overview'] });
+ },
+ });
+
+ // Update all settings mutation
+ const updateSettingsMutation = useMutation({
+ mutationFn: updateSecuritySettings,
+ onSuccess: () => {
+ queryClient.invalidateQueries({ queryKey: ['security', 'settings'] });
+ queryClient.invalidateQueries({ queryKey: ['security', 'overview'] });
+ queryClient.invalidateQueries({ queryKey: ['security', 'audit'] });
+ },
+ });
+
+ // Key rotation mutation
+ const rotateKeyMutation = useMutation({
+ mutationFn: rotateKey,
+ onSuccess: () => {
+ queryClient.invalidateQueries({ queryKey: ['security', 'settings'] });
+ queryClient.invalidateQueries({ queryKey: ['security', 'overview'] });
+ },
+ });
+
+ // Export settings mutation
+ const exportSettingsMutation = useMutation({
+ mutationFn: exportSecuritySettings,
+ });
+
+ // Import settings mutation
+ const importSettingsMutation = useMutation({
+ mutationFn: importSecuritySettings,
+ onSuccess: () => {
+ queryClient.invalidateQueries({ queryKey: ['security', 'settings'] });
+ },
+ });
+
+ // Update a single setting
+ const updateSetting = async (category: string, key: string, value: any) => {
+ try {
+ await updateSettingMutation.mutateAsync({ category, key, value });
+ } catch (error) {
+ console.error(`Failed to update ${category}.${key}:`, error);
+ throw error;
+ }
+ };
+
+ // Update multiple settings at once
+  const updateSettings = async (newSettings: Partial<SecuritySettings>) => {
+ try {
+ await updateSettingsMutation.mutateAsync(newSettings);
+ } catch (error) {
+ console.error('Failed to update security settings:', error);
+ throw error;
+ }
+ };
+
+ // Rotate security key
+ const rotateSecurityKey = async (request: KeyRotationRequest) => {
+ try {
+ return await rotateKeyMutation.mutateAsync(request);
+ } catch (error) {
+ console.error('Failed to rotate security key:', error);
+ throw error;
+ }
+ };
+
+ // Export settings to file
+ const exportSettings = async () => {
+ try {
+ const blob = await exportSettingsMutation.mutateAsync();
+ const url = window.URL.createObjectURL(blob);
+ const a = document.createElement('a');
+ a.style.display = 'none';
+ a.href = url;
+ a.download = `redflag-security-settings-${new Date().toISOString().split('T')[0]}.json`;
+ document.body.appendChild(a);
+ a.click();
+ window.URL.revokeObjectURL(url);
+ document.body.removeChild(a);
+ } catch (error) {
+ console.error('Failed to export security settings:', error);
+ throw error;
+ }
+ };
+
+ // Import settings from file
+ const importSettings = async (file: File) => {
+ try {
+ return await importSettingsMutation.mutateAsync(file);
+ } catch (error) {
+ console.error('Failed to import security settings:', error);
+ throw error;
+ }
+ };
+
+ // Reset settings to defaults
+ const resetToDefaults = async () => {
+ try {
+ await updateSettings(defaultSecuritySettings);
+ } catch (error) {
+ console.error('Failed to reset security settings to defaults:', error);
+ throw error;
+ }
+ };
+
+ return {
+ // Data
+ settings,
+ securityOverview,
+
+ // Loading states
+ loading: loadingSettings || loadingOverview,
+ saving: updateSettingMutation.isPending || updateSettingsMutation.isPending,
+
+ // Errors
+ error: settingsError || updateSettingMutation.error || updateSettingsMutation.error,
+
+ // Actions
+ updateSetting,
+ updateSettings,
+ rotateSecurityKey,
+ exportSettings,
+ importSettings,
+ resetToDefaults,
+ refetch: refetchSettings,
+ };
+};
+
+// Hook for security audit trail
+export const useSecurityAudit = (page: number = 1, pageSize: number = 20) => {
+ return useQuery({
+ queryKey: ['security', 'audit', page, pageSize],
+ queryFn: () => fetchSecurityAudit(page, pageSize),
+ staleTime: 2 * 60 * 1000, // 2 minutes
+ });
+};
+
+// Hook for security events
+export const useSecurityEvents = (
+ page: number = 1,
+ pageSize: number = 20,
+ filters?: EventFilters
+) => {
+ return useQuery({
+ queryKey: ['security', 'events', page, pageSize, filters],
+ queryFn: () => fetchSecurityEvents(page, pageSize, filters),
+ staleTime: 30 * 1000, // 30 seconds
+ });
+};
+
+// Hook for machine fingerprint
+export const useMachineFingerprint = (agentId: string) => {
+ return useQuery({
+ queryKey: ['security', 'machine-fingerprint', agentId],
+ queryFn: () => getMachineFingerprint(agentId),
+ enabled: !!agentId,
+ staleTime: 5 * 60 * 1000, // 5 minutes
+ });
+};
+
+// Hook for real-time security events (WebSocket)
+export const useSecurityWebSocket = () => {
+  const [events, setEvents] = React.useState<SecurityEvent[]>([]);
+  const [connected, setConnected] = React.useState(false);
+  const ws = React.useRef<WebSocket | null>(null);
+
+ React.useEffect(() => {
+ // Initialize WebSocket connection
+ const token = localStorage.getItem('auth_token');
+ const wsUrl = `${window.location.protocol === 'https:' ? 'wss:' : 'ws:'}//${window.location.host}/api/v1/security/ws`;
+
+    // The browser WebSocket constructor cannot send custom headers, so the
+    // token is passed as a query parameter (assumes the server accepts it there).
+    ws.current = new WebSocket(`${wsUrl}?token=${encodeURIComponent(token ?? '')}`);
+
+ ws.current.onopen = () => {
+ setConnected(true);
+ console.log('Security WebSocket connected');
+ };
+
+ ws.current.onmessage = (event) => {
+ try {
+ const message = JSON.parse(event.data);
+ if (message.type === 'security_event') {
+ setEvents(prev => [message.data, ...prev.slice(0, 999)]); // Keep last 1000 events
+ }
+ } catch (error) {
+ console.error('Failed to parse WebSocket message:', error);
+ }
+ };
+
+ ws.current.onerror = (error) => {
+ console.error('Security WebSocket error:', error);
+ setConnected(false);
+ };
+
+ ws.current.onclose = () => {
+ setConnected(false);
+ console.log('Security WebSocket disconnected');
+
+ // Attempt to reconnect after 5 seconds
+ setTimeout(() => {
+ if (!ws.current || ws.current.readyState === WebSocket.CLOSED) {
+ // Re-initialize connection
+ }
+ }, 5000);
+ };
+
+ return () => {
+ if (ws.current) {
+ ws.current.close();
+ }
+ };
+ }, []);
+
+ return {
+ events,
+ connected,
+ clearEvents: () => setEvents([]),
+ };
+};
+
+// Helper hook for form validation
+export const useSecurityValidation = () => {
+ const validateSetting = (key: string, value: any): string | null => {
+ switch (key) {
+ case 'nonce_timeout_seconds':
+ if (value < 60 || value > 3600) {
+ return 'Nonce timeout must be between 60 and 3600 seconds';
+ }
+ break;
+
+ case 'retention_days':
+ if (value < 1 || value > 365) {
+ return 'Retention period must be between 1 and 365 days';
+ }
+ break;
+
+ case 'rotation_interval_days':
+ if (value < 7 || value > 365) {
+ return 'Rotation interval must be between 7 and 365 days';
+ }
+ break;
+
+ case 'binding_grace_period_minutes':
+ if (value < 1 || value > 60) {
+ return 'Grace period must be between 1 and 60 minutes';
+ }
+ break;
+
+ default:
+ return null;
+ }
+
+ return null;
+ };
+
+  const validateAll = (settings: SecuritySettings): Record<string, string> => {
+    const errors: Record<string, string> = {};
+
+ // Validate command signing
+ const cmdSigning = settings.command_signing;
+ if (cmdSigning.enabled && !cmdSigning.algorithm) {
+ errors['command_signing.algorithm'] = 'Algorithm is required when command signing is enabled';
+ }
+
+ // Validate update security
+ const updateSec = settings.update_security;
+ if (updateSec.enabled) {
+ const nonceError = validateSetting('nonce_timeout_seconds', updateSec.nonce_timeout_seconds);
+ if (nonceError) errors['update_security.nonce_timeout_seconds'] = nonceError;
+ }
+
+ // Validate machine binding
+ const machineBinding = settings.machine_binding;
+ if (machineBinding.enabled) {
+ const hasAnyComponent = Object.values(machineBinding.binding_components).some(v => v);
+ if (!hasAnyComponent) {
+ errors['machine_binding.binding_components'] = 'At least one binding component must be selected';
+ }
+
+ const graceError = validateSetting('binding_grace_period_minutes', machineBinding.binding_grace_period_minutes);
+ if (graceError) errors['machine_binding.binding_grace_period_minutes'] = graceError;
+ }
+
+ // Validate logging
+ const logging = settings.logging;
+ const retentionError = validateSetting('retention_days', logging.retention_days);
+ if (retentionError) errors['logging.retention_days'] = retentionError;
+
+ // Validate key management
+ const keyMgmt = settings.key_management;
+ if (keyMgmt.auto_rotation) {
+ const rotationError = validateSetting('rotation_interval_days', keyMgmt.rotation_interval_days);
+ if (rotationError) errors['key_management.rotation_interval_days'] = rotationError;
+
+ const graceError = validateSetting('grace_period_days', keyMgmt.grace_period_days);
+ if (graceError) errors['key_management.grace_period_days'] = graceError;
+ }
+
+ return errors;
+ };
+
+ return { validateSetting, validateAll };
+};
+
+// Import React for WebSocket hook
+import React from 'react';
\ No newline at end of file
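The `onclose` handler in `useSecurityWebSocket` leaves reconnection as a stub with a fixed 5-second timer. A capped exponential backoff is a common refinement; a sketch of just the delay schedule (the base and cap values are illustrative, not taken from the diff):

```typescript
// Capped exponential backoff: 1s, 2s, 4s, 8s, ... up to a 30s ceiling.
// attempt is the number of consecutive failed reconnects so far.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** Math.max(0, attempt));
}
```

In the hook, `onclose` would schedule the next connection attempt with `setTimeout(connect, backoffDelayMs(attempt++))` and reset `attempt` to 0 in `onopen`; adding random jitter on top is also typical to avoid thundering-herd reconnects.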
diff --git a/aggregator-web/src/pages/Agents.tsx b/aggregator-web/src/pages/Agents.tsx
index e9746c4..d7a1075 100644
--- a/aggregator-web/src/pages/Agents.tsx
+++ b/aggregator-web/src/pages/Agents.tsx
@@ -27,14 +27,13 @@ import {
MonitorPlay,
Upload,
} from 'lucide-react';
-import { useAgents, useAgent, useScanAgent, useScanMultipleAgents, useUnregisterAgent } from '@/hooks/useAgents';
+import { useAgents, useAgent, useScanMultipleAgents, useUnregisterAgent } from '@/hooks/useAgents';
import { useAgentUpdate } from '@/hooks/useAgentUpdate';
import { useActiveCommands, useCancelCommand } from '@/hooks/useCommands';
import { useHeartbeatStatus, useInvalidateHeartbeat, useHeartbeatAgentSync } from '@/hooks/useHeartbeat';
import { agentApi } from '@/lib/api';
import { useQueryClient } from '@tanstack/react-query';
import { getStatusColor, formatRelativeTime, isOnline, formatBytes } from '@/lib/utils';
-import { AgentUpdate } from '@/components/AgentUpdate';
import { cn } from '@/lib/utils';
import toast from 'react-hot-toast';
import { AgentSystemUpdates } from '@/components/AgentUpdates';
@@ -62,6 +61,7 @@ const Agents: React.FC = () => {
const [heartbeatLoading, setHeartbeatLoading] = useState(false); // Loading state for heartbeat toggle
const [heartbeatCommandId, setHeartbeatCommandId] = useState(null); // Track specific heartbeat command
const [showUpdateModal, setShowUpdateModal] = useState(false); // Update modal state
+  const [singleAgentUpdate, setSingleAgentUpdate] = useState<string | null>(null); // Single agent update modal
const dropdownRef = useRef(null);
// Close dropdown when clicking outside
@@ -230,7 +230,6 @@ const Agents: React.FC = () => {
// Fetch single agent if ID is provided
const { data: selectedAgentData } = useAgent(id || '', !!id);
- const scanAgentMutation = useScanAgent();
const scanMultipleMutation = useScanMultipleAgents();
const unregisterAgentMutation = useUnregisterAgent();
@@ -286,16 +285,6 @@ const Agents: React.FC = () => {
}
};
- // Handle scan operations
- const handleScanAgent = async (agentId: string) => {
- try {
- await scanAgentMutation.mutateAsync(agentId);
- toast.success('Scan triggered successfully');
- } catch (error) {
- // Error handling is done in the hook
- }
- };
-
const handleScanSelected = async () => {
if (selectedAgents.length === 0) {
toast.error('Please select at least one agent');
@@ -498,19 +487,6 @@ const Agents: React.FC = () => {
Registered {formatRelativeTime(selectedAgent.created_at)}
-
-
handleScanAgent(selectedAgent.id)}
- disabled={scanAgentMutation.isPending}
- className="btn btn-primary sm:ml-4 w-full sm:w-auto"
- >
- {scanAgentMutation.isPending ? (
-
- ) : (
-
- )}
- Scan Now
-
@@ -838,22 +814,6 @@ const Agents: React.FC = () => {
);
})()
)}
-
- {/* Action Button */}
-
- handleScanAgent(selectedAgent.id)}
- disabled={scanAgentMutation.isPending}
- className="btn btn-primary w-full sm:w-auto text-sm"
- >
- {scanAgentMutation.isPending ? (
-
- ) : (
-
- )}
- Scan Now
-
-
{/* System info */}
@@ -1326,10 +1286,19 @@ const Agents: React.FC = () => {
{agent.current_version || 'Initial Registration'}
{agent.update_available === true && (
-
+                        <button
+                          onClick={(e) => {
+                            e.stopPropagation();
+                            // Open update modal with this single agent
+                            setSingleAgentUpdate(agent.id);
+                            setShowUpdateModal(true);
+                          }}
+                          className="flex items-center text-xs text-amber-600 bg-amber-50 hover:bg-amber-100 px-1.5 py-0.5 rounded-full cursor-pointer transition-colors"
+                          title="Click to update agent"
+                        >
+                          Update
-
+                        </button>
)}
{agent.update_available === false && agent.current_version && (
@@ -1373,14 +1342,6 @@ const Agents: React.FC = () => {
-
handleScanAgent(agent.id)}
- disabled={scanAgentMutation.isPending}
- className="text-gray-400 hover:text-primary-600"
- title="Trigger scan"
- >
-
-
{
setSelectedAgents([agent.id]);
@@ -1395,13 +1356,6 @@ const Agents: React.FC = () => {
>
- {/* Agent Update with nonce security */}
-
{
- queryClient.invalidateQueries({ queryKey: ['agents'] });
- }}
- />
handleRemoveAgent(agent.id, agent.hostname)}
disabled={unregisterAgentMutation.isPending}
@@ -1430,12 +1384,17 @@ const Agents: React.FC = () => {
{/* Agent Updates Modal */}
setShowUpdateModal(false)}
- selectedAgentIds={selectedAgents}
+ onClose={() => {
+ setShowUpdateModal(false);
+ setSingleAgentUpdate(null);
+ setSelectedAgents([]);
+ }}
+ selectedAgentIds={singleAgentUpdate ? [singleAgentUpdate] : selectedAgents}
onAgentsUpdated={() => {
// Refresh agents data after update
queryClient.invalidateQueries({ queryKey: ['agents'] });
setSelectedAgents([]);
+ setSingleAgentUpdate(null);
}}
/>
diff --git a/aggregator-web/src/pages/SecuritySettings.tsx b/aggregator-web/src/pages/SecuritySettings.tsx
new file mode 100644
index 0000000..5af103d
--- /dev/null
+++ b/aggregator-web/src/pages/SecuritySettings.tsx
@@ -0,0 +1,738 @@
+import React, { useState, useEffect } from 'react';
+import { useParams, useNavigate, useLocation } from 'react-router-dom';
+import {
+ Shield,
+ Lock,
+ Key,
+ FileText,
+ Settings as SettingsIcon,
+ AlertTriangle,
+ CheckCircle,
+ XCircle,
+ RefreshCw,
+ Download,
+ Upload,
+ Save,
+ RotateCcw,
+ Eye,
+ EyeOff,
+ ChevronRight,
+ Activity,
+ History,
+ Terminal,
+ Server
+} from 'lucide-react';
+
+import { useSecuritySettings, useSecurityValidation } from '@/hooks/useSecuritySettings';
+import { SecuritySettings as SecuritySettingsType, SecuritySetting, ConfirmationDialogState } from '@/types/security';
+import SecurityStatusCard from '@/components/security/SecurityStatusCard';
+import SecurityCategorySection from '@/components/security/SecurityCategorySection';
+import SecurityEvents from '@/components/security/SecurityEvents';
+
+const SecuritySettings: React.FC = () => {
+ const navigate = useNavigate();
+ const location = useLocation();
+ const { tab = 'overview' } = useParams();
+
+ const {
+ settings,
+ securityOverview,
+ loading,
+ saving,
+ error,
+ updateSetting,
+ updateSettings,
+ rotateSecurityKey,
+ exportSettings,
+ importSettings,
+ resetToDefaults,
+ refetch,
+ } = useSecuritySettings();
+
+ const { validateAll } = useSecurityValidation();
+
+ // State management
+  const [activeTab, setActiveTab] = useState(tab);
+  const [localSettings, setLocalSettings] = useState<SecuritySettingsType | null>(null);
+  const [hasChanges, setHasChanges] = useState(false);
+  const [validationErrors, setValidationErrors] = useState<Record<string, string>>({});
+  const [confirmationDialog, setConfirmationDialog] = useState<ConfirmationDialogState>({
+ isOpen: false,
+ title: '',
+ message: '',
+ severity: 'warning',
+ requiresConfirmation: false,
+ onConfirm: () => {},
+ onCancel: () => {},
+ });
+ const [showAdvanced, setShowAdvanced] = useState(false);
+
+ // Sync URL tab with state
+ useEffect(() => {
+ if (tab !== activeTab) {
+ navigate(`/settings/security/${activeTab}`, { replace: true });
+ }
+ }, [activeTab, tab, navigate]);
+
+ // Initialize local settings
+ useEffect(() => {
+ if (settings && !localSettings) {
+ setLocalSettings(JSON.parse(JSON.stringify(settings)));
+ }
+ }, [settings, localSettings]);
+
+ // Validate settings when they change
+ useEffect(() => {
+ if (localSettings) {
+ const errors = validateAll(localSettings);
+ setValidationErrors(errors);
+ }
+ }, [localSettings, validateAll]);
+
+ // Tab configuration
+ const tabs = [
+ { id: 'overview', label: 'Overview', icon: Shield },
+ { id: 'command-signing', label: 'Command Signing', icon: Terminal },
+ { id: 'update-security', label: 'Update Security', icon: Download },
+ { id: 'machine-binding', label: 'Machine Binding', icon: Server },
+ { id: 'logging', label: 'Logging', icon: FileText },
+ { id: 'key-management', label: 'Key Management', icon: Key },
+ { id: 'events', label: 'Security Events', icon: Activity },
+ { id: 'audit', label: 'Audit Trail', icon: History },
+ ];
+
+ // Command Signing Settings
+ const commandSigningSettings: SecuritySetting[] = [
+ {
+ key: 'enabled',
+ label: 'Enable Command Signing',
+ type: 'toggle',
+ value: localSettings?.command_signing?.enabled ?? false,
+ description: 'Cryptographically sign all commands to prevent tampering',
+ },
+ {
+ key: 'enforcement_mode',
+ label: 'Enforcement Mode',
+ type: 'select',
+ value: localSettings?.command_signing?.enforcement_mode ?? 'strict',
+ options: ['strict', 'warning', 'disabled'],
+ description: 'How to handle unsigned commands',
+ disabled: !localSettings?.command_signing?.enabled,
+ },
+ {
+ key: 'algorithm',
+ label: 'Signing Algorithm',
+ type: 'select',
+ value: localSettings?.command_signing?.algorithm ?? 'ed25519',
+ options: ['ed25519', 'rsa-2048', 'ecdsa-p256'],
+ description: 'Cryptographic algorithm for signing commands',
+ disabled: !localSettings?.command_signing?.enabled,
+ },
+ ];
+
+ // Update Security Settings
+ const updateSecuritySettings: SecuritySetting[] = [
+ {
+ key: 'enabled',
+ label: 'Enable Update Security',
+ type: 'toggle',
+ value: localSettings?.update_security?.enabled ?? false,
+ description: 'Require signed updates and nonce validation',
+ },
+ {
+ key: 'enforcement_mode',
+ label: 'Enforcement Mode',
+ type: 'select',
+ value: localSettings?.update_security?.enforcement_mode ?? 'strict',
+ options: ['strict', 'warning', 'disabled'],
+ description: 'How to handle unsigned or invalid updates',
+ disabled: !localSettings?.update_security?.enabled,
+ },
+ {
+ key: 'nonce_timeout_seconds',
+ label: 'Nonce Timeout',
+ type: 'slider',
+ value: localSettings?.update_security?.nonce_timeout_seconds ?? 300,
+ min: 60,
+ max: 3600,
+ step: 60,
+ description: 'How long a nonce is valid (in seconds)',
+ disabled: !localSettings?.update_security?.enabled,
+ },
+ {
+ key: 'require_signature_verification',
+ label: 'Require Signature Verification',
+ type: 'toggle',
+ value: localSettings?.update_security?.require_signature_verification ?? true,
+ description: 'Verify digital signatures on all updates',
+ disabled: !localSettings?.update_security?.enabled,
+ },
+ ];
+
+ // Machine Binding Settings
+ const machineBindingSettings: SecuritySetting[] = [
+ {
+ key: 'enabled',
+ label: 'Enable Machine Binding',
+ type: 'toggle',
+ value: localSettings?.machine_binding?.enabled ?? false,
+ description: 'Bind agents to specific machine fingerprint',
+ },
+ {
+ key: 'enforcement_mode',
+ label: 'Enforcement Mode',
+ type: 'select',
+ value: localSettings?.machine_binding?.enforcement_mode ?? 'strict',
+ options: ['strict', 'warning', 'disabled'],
+ description: 'How to handle machine binding violations',
+ disabled: !localSettings?.machine_binding?.enabled,
+ },
+ {
+ key: 'binding_grace_period_minutes',
+ label: 'Grace Period',
+ type: 'slider',
+ value: localSettings?.machine_binding?.binding_grace_period_minutes ?? 5,
+ min: 1,
+ max: 60,
+ step: 1,
+ description: 'Minutes to allow before enforcing binding',
+ disabled: !localSettings?.machine_binding?.enabled,
+ },
+ {
+ key: 'binding_components',
+ label: 'Binding Components',
+ type: 'checkbox-group',
+ value: localSettings?.machine_binding?.binding_components ?? {},
+ options: [
+ { label: 'Hardware ID', value: 'hardware_id' },
+ { label: 'BIOS UUID', value: 'bios_uuid' },
+ { label: 'MAC Addresses', value: 'mac_addresses' },
+ { label: 'CPU ID', value: 'cpu_id' },
+ { label: 'Disk Serial', value: 'disk_serial' },
+ ],
+ description: 'Machine components to bind against',
+ disabled: !localSettings?.machine_binding?.enabled,
+ },
+ {
+ key: 'violation_action',
+ label: 'Violation Action',
+ type: 'select',
+ value: localSettings?.machine_binding?.violation_action ?? 'block',
+ options: ['block', 'warn', 'log_only'],
+ description: 'Action to take on binding violations',
+ disabled: !localSettings?.machine_binding?.enabled,
+ },
+ ];
+
+ // Logging Settings
+ const loggingSettings: SecuritySetting[] = [
+ {
+ key: 'log_level',
+ label: 'Log Level',
+ type: 'select',
+ value: localSettings?.logging?.log_level ?? 'info',
+ options: ['debug', 'info', 'warn', 'error'],
+ description: 'Minimum severity level to log',
+ },
+ {
+ key: 'retention_days',
+ label: 'Retention Period',
+ type: 'number',
+ value: localSettings?.logging?.retention_days ?? 30,
+ min: 1,
+ max: 365,
+ description: 'Days to retain security logs (1-365)',
+ },
+ {
+ key: 'log_failures',
+ label: 'Log Security Failures',
+ type: 'toggle',
+ value: localSettings?.logging?.log_failures ?? true,
+ description: 'Record all security failures and violations',
+ },
+ {
+ key: 'log_successes',
+ label: 'Log Security Successes',
+ type: 'toggle',
+ value: localSettings?.logging?.log_successes ?? false,
+ description: 'Record successful security operations',
+ },
+ {
+ key: 'export_format',
+ label: 'Export Format',
+ type: 'select',
+ value: localSettings?.logging?.export_format ?? 'json',
+ options: ['json', 'csv', 'syslog'],
+ description: 'Default format for log exports',
+ },
+ ];
+
+ // Key Management Settings
+ const keyManagementSettings: SecuritySetting[] = [
+ {
+ key: 'current_key_info',
+ label: 'Current Key',
+ type: 'text',
+ value: localSettings?.key_management?.current_key?.key_id ?? 'No key configured',
+ description: 'Currently active signing key',
+ disabled: true,
+ },
+ {
+ key: 'auto_rotation',
+ label: 'Auto-Rotation',
+ type: 'toggle',
+ value: localSettings?.key_management?.auto_rotation ?? false,
+ description: 'Automatically rotate signing keys on schedule',
+ },
+ {
+ key: 'rotation_interval_days',
+ label: 'Rotation Interval',
+ type: 'number',
+ value: localSettings?.key_management?.rotation_interval_days ?? 90,
+ min: 7,
+ max: 365,
+ description: 'Days between automatic key rotations',
+ disabled: !localSettings?.key_management?.auto_rotation,
+ },
+ {
+ key: 'grace_period_days',
+ label: 'Grace Period',
+ type: 'number',
+ value: localSettings?.key_management?.grace_period_days ?? 7,
+ min: 1,
+ max: 30,
+ description: 'Days to accept old key after rotation',
+ disabled: !localSettings?.key_management?.auto_rotation,
+ },
+ ];
+
+ // Handle settings change
+ const handleSettingChange = async (category: string, key: string, value: any) => {
+ if (!localSettings) return;
+
+ const newSettings = {
+ ...localSettings,
+ [category]: {
+ ...localSettings[category as keyof SecuritySettingsType],
+ [key]: value,
+ },
+ };
+
+ setLocalSettings(newSettings);
+ setHasChanges(true);
+
+ // Auto-save for simple toggles
+ if (typeof value === 'boolean') {
+ try {
+ await updateSetting(category, key, value);
+ setHasChanges(false);
+ } catch (error) {
+ // Revert on error
+ setLocalSettings(settings);
+ setHasChanges(false);
+ }
+ }
+ };
+
+ // Save all changes
+ const handleSaveChanges = async () => {
+ if (!localSettings || !hasChanges) return;
+
+ try {
+ await updateSettings(localSettings);
+ setHasChanges(false);
+ } catch (error) {
+ console.error('Failed to save settings:', error);
+ }
+ };
+
+ // Show confirmation dialog
+ const showConfirmation = (
+ title: string,
+ message: string,
+ onConfirm: () => void,
+ requiresConfirmation: boolean = false
+ ) => {
+ setConfirmationDialog({
+ isOpen: true,
+ title,
+ message,
+ severity: 'danger',
+ requiresConfirmation,
+ onConfirm: () => {
+ onConfirm();
+ setConfirmationDialog(prev => ({ ...prev, isOpen: false }));
+ },
+ onCancel: () => setConfirmationDialog(prev => ({ ...prev, isOpen: false })),
+ });
+ };
+
+ // Handle key rotation
+ const handleRotateKey = () => {
+ showConfirmation(
+ 'Rotate Security Key',
+ 'Rotating the security key will invalidate all existing agent connections. Agents will need to reconnect with the new key. This action cannot be undone.',
+ async () => {
+ await rotateSecurityKey({ reason: 'manual' });
+ refetch();
+ },
+ true
+ );
+ };
+
+ // Handle reset to defaults
+ const handleResetDefaults = () => {
+ showConfirmation(
+ 'Reset to Defaults',
+ 'This will reset all security settings to their default values. This may affect your system security. Type "RESET" to confirm.',
+ async () => {
+ await resetToDefaults();
+ setLocalSettings(null);
+ setHasChanges(false);
+ },
+ true
+ );
+ };
+
+ // Render tab content
+ const renderTabContent = () => {
+ if (!localSettings) {
+ return (
+
+ );
+ }
+
+ switch (activeTab) {
+ case 'overview':
+ return (
+
+ {securityOverview && (
+
({
+ name: name.replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase()),
+ enabled: data.enabled,
+ status: data.status === 'healthy' ? 'healthy' : data.status === 'warning' ? 'warning' : 'error',
+ last_check: new Date().toISOString(),
+ details: data.status,
+ })) : [],
+ recent_events: securityOverview.alerts?.length || 0,
+ last_updated: new Date().toISOString(),
+ }}
+ onRefresh={refetch}
+ loading={loading}
+ />
+ )}
+
+ {/* Quick Actions */}
+
+ 0}
+ className="flex items-center justify-center gap-2 px-4 py-3 bg-blue-600 text-white rounded-lg hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed"
+ >
+
+ {saving ? 'Saving...' : 'Save Changes'}
+
+
+
+
+ Export Settings
+
+
+ document.getElementById('import-file')?.click()}
+ className="flex items-center justify-center gap-2 px-4 py-3 bg-gray-600 text-white rounded-lg hover:bg-gray-700"
+ >
+
+ Import Settings
+
+ {
+ const file = e.target.files?.[0];
+ if (file) {
+ try {
+ await importSettings(file);
+ refetch();
+ } catch (error) {
+ console.error('Import failed:', error);
+ }
+ }
+ e.target.value = '';
+ }}
+ />
+
+
+
+ Reset to Defaults
+
+
+
+ {/* Status Summary */}
+
+
Security Status
+
+ {[
+ { name: 'Command Signing', status: localSettings.command_signing.enabled },
+ { name: 'Update Security', status: localSettings.update_security.enabled },
+ { name: 'Machine Binding', status: localSettings.machine_binding.enabled },
+ { name: 'Security Logging', status: true },
+ ].map((feature) => (
+
+ {feature.status ? (
+
+ ) : (
+
+ )}
+ {feature.name}
+
+ ))}
+
+
+
+ );
+
+ case 'command-signing':
+ return (
+ handleSettingChange('command_signing', key, value)}
+ disabled={loading}
+ error={error}
+ />
+ );
+
+ case 'update-security':
+ return (
+ handleSettingChange('update_security', key, value)}
+ disabled={loading}
+ error={error}
+ />
+ );
+
+ case 'machine-binding':
+ return (
+ handleSettingChange('machine_binding', key, value)}
+ disabled={loading}
+ error={error}
+ />
+ );
+
+ case 'logging':
+ return (
+ handleSettingChange('logging', key, value)}
+ disabled={loading}
+ error={error}
+ />
+ );
+
+ case 'key-management':
+ return (
+
+
handleSettingChange('key_management', key, value)}
+ disabled={loading}
+ error={error}
+ />
+
+ {/* Key Actions */}
+
+
Key Actions
+
+
+
+ Rotate Security Key
+
+
+ Generate a new signing key. The old key will remain valid during the grace period.
+
+
+
+
+ );
+
+ case 'events':
+ return ;
+
+ case 'audit':
+ return (
+
+
Audit Trail
+
Audit trail implementation coming soon...
+
+ );
+
+ default:
+ return null;
+ }
+ };
+
+ return (
+
+ {/* Header */}
+
+
+
+
+
+ Security Settings
+
+
+ Configure security features to protect your RedFlag deployment
+
+
+
setShowAdvanced(!showAdvanced)}
+ className="flex items-center gap-2 px-3 py-2 text-sm text-gray-600 hover:text-gray-900"
+ >
+ {showAdvanced ? : }
+ {showAdvanced ? 'Hide Advanced' : 'Show Advanced'}
+
+
+
+
+ {/* Error Display */}
+ {error && (
+
+ )}
+
+ {/* Validation Errors */}
+ {Object.keys(validationErrors).length > 0 && (
+
+
+
+ {Object.entries(validationErrors).map(([key, error]) => (
+ • {error}
+ ))}
+
+
+ )}
+
+ {/* Tabs */}
+
+
+ {tabs.map((tabItem) => (
+ setActiveTab(tabItem.id)}
+ className={`flex items-center gap-2 py-3 px-1 border-b-2 font-medium text-sm ${
+ activeTab === tabItem.id
+ ? 'border-blue-500 text-blue-600'
+ : 'border-transparent text-gray-500 hover:text-gray-700 hover:border-gray-300'
+ }`}
+ >
+
+ {tabItem.label}
+ {tabItem.id === 'events' && (securityOverview?.alerts?.length ?? 0) > 0 && (
+
+ {securityOverview.alerts.length}
+
+ )}
+
+ ))}
+
+
+
+ {/* Tab Content */}
+
+ {renderTabContent()}
+
+
+ {/* Confirmation Dialog */}
+ {confirmationDialog.isOpen && (
+
+
+
+ {confirmationDialog.title}
+
+
+ {confirmationDialog.message}
+
+ {confirmationDialog.requiresConfirmation && (
+
+
+ Type "{confirmationDialog.title === 'Rotate Security Key' ? 'CONFIRM' : 'RESET'}" to proceed
+
+ {
+ const expected = confirmationDialog.title === 'Rotate Security Key' ? 'CONFIRM' : 'RESET';
+ if (e.target.value === expected) {
+ e.target.classList.remove('border-red-300');
+ e.target.classList.add('border-green-300');
+ } else {
+ e.target.classList.remove('border-green-300');
+ e.target.classList.add('border-red-300');
+ }
+ }}
+ />
+
+ )}
+
+
+ Cancel
+
+
+ Confirm
+
+
+
+
+ )}
+
+ );
+};
+
+export default SecuritySettings;
\ No newline at end of file
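The `handleSettingChange` logic above spreads both the top-level settings object and the touched category so React state stays immutable. A minimal standalone sketch of that pattern (simplified shapes and a hypothetical `updateCategory` helper — not the component's actual code):

```typescript
// Immutable nested update: clone the top level and the touched
// category; untouched categories keep their object identity, which
// lets memoized children skip re-rendering.
interface Settings {
  command_signing: { enabled: boolean; enforcement_mode: string };
  logging: { log_level: string; retention_days: number };
}

function updateCategory(
  settings: Settings,
  category: keyof Settings,
  key: string,
  value: unknown,
): Settings {
  const current = settings[category] as Record<string, unknown>;
  return { ...settings, [category]: { ...current, [key]: value } } as Settings;
}

const before: Settings = {
  command_signing: { enabled: false, enforcement_mode: 'strict' },
  logging: { log_level: 'info', retention_days: 30 },
};
const after = updateCategory(before, 'command_signing', 'enabled', true);
```

Note that `after.logging` is the same object reference as `before.logging`; only the spread copies are new.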
diff --git a/aggregator-web/src/pages/Setup.tsx b/aggregator-web/src/pages/Setup.tsx
index 69d36f5..62c2cf7 100644
--- a/aggregator-web/src/pages/Setup.tsx
+++ b/aggregator-web/src/pages/Setup.tsx
@@ -36,6 +36,7 @@ const Setup: React.FC = () => {
const [showDbPassword, setShowDbPassword] = useState(false);
 const [signingKeys, setSigningKeys] = useState<SigningKeys | null>(null);
const [generatingKeys, setGeneratingKeys] = useState(false);
+ const [configType, setConfigType] = useState<'env' | 'swarm'>('env');
const [formData, setFormData] = useState({
adminUser: 'admin',
@@ -144,13 +145,14 @@ const Setup: React.FC = () => {
try {
const result = await setupApi.configure(formData);
- // Add signing keys to env content if generated
- let finalEnvContent = result.envContent || '';
- if (signingKeys && finalEnvContent) {
- finalEnvContent += `\n# Ed25519 Signing Keys (for agent updates)\nREDFLAG_SIGNING_PRIVATE_KEY=${signingKeys.private_key}\n`;
+ let configContent = '';
+ if (configType === 'env') {
+ configContent = generateEnvContent(result, signingKeys);
+ } else {
+ configContent = generateDockerSecretCommands(result, signingKeys);
}
- setEnvContent(finalEnvContent || null);
+ setEnvContent(configContent || null);
setShowSuccess(true);
toast.success(result.message || 'Configuration saved successfully!');
@@ -164,6 +166,62 @@ const Setup: React.FC = () => {
}
};
+ const generateEnvContent = (result: any, keys: SigningKeys | null): string => {
+ if (!result.envContent) return '';
+
+ let envContent = result.envContent;
+
+ if (keys) {
+ envContent += `\n# Ed25519 Signing Keys (for agent updates)\nREDFLAG_SIGNING_PRIVATE_KEY=${keys.private_key}\n`;
+ }
+
+ return envContent;
+ };
+
+ const generateDockerSecretCommands = (result: any, keys: SigningKeys | null): string => {
+ if (!result.envContent) return '';
+
+ // Parse the envContent to extract values
+ const envLines = result.envContent.split('\n');
+ const envVars: Record<string, string> = {};
+
+ envLines.forEach(line => {
+ const match = line.match(/^([^#=]+)=(.+)$/);
+ if (match) {
+ envVars[match[1].trim()] = match[2].trim();
+ }
+ });
+
+ // Add signing keys if available
+ if (keys) {
+ envVars['REDFLAG_SIGNING_PRIVATE_KEY'] = keys.private_key;
+ }
+
+ // Generate Docker secret commands
+ const commands = [
+ '# RedFlag Docker Secrets Configuration',
+ `# Generated by web setup on ${new Date().toISOString().slice(0, 10)}`,
+ '# [WARNING] SECURITY CRITICAL: Backup the signing key or you will lose access to all agents',
+ '#',
+ '# Run these commands on your Docker host to create the secrets:',
+ '#',
+ `printf '%s' '${envVars['REDFLAG_ADMIN_PASSWORD'] || ''}' | docker secret create redflag_admin_password -`,
+ `printf '%s' '${envVars['REDFLAG_JWT_SECRET'] || ''}' | docker secret create redflag_jwt_secret -`,
+ `printf '%s' '${envVars['REDFLAG_DB_PASSWORD'] || ''}' | docker secret create redflag_db_password -`,
+ `printf '%s' '${envVars['REDFLAG_SIGNING_PRIVATE_KEY'] || ''}' | docker secret create redflag_signing_private_key -`,
+ '',
+ '# After creating the secrets, restart your RedFlag server:',
+ '# docker compose down && docker compose up -d',
+ '',
+ '# Optional: Save these values securely (password manager, encrypted storage)',
+ `# Admin Password: ${envVars['REDFLAG_ADMIN_PASSWORD'] || ''}`,
+ `# JWT Secret: ${envVars['REDFLAG_JWT_SECRET'] || ''}`,
+ `# DB Password: ${envVars['REDFLAG_DB_PASSWORD'] || ''}`,
+ ].join('\n');
+
+ return commands;
+ };
+
// Success screen with configuration display
if (showSuccess && envContent) {
return (
@@ -211,7 +269,22 @@ const Setup: React.FC = () => {
{/* Configuration Content Section */}
{envContent && (
-
Configuration File Content
+
+
+ {configType === 'env' ? 'Environment Configuration (.env)' : 'Docker Swarm Secrets'}
+
+
+ .env
+ setConfigType(configType === 'env' ? 'swarm' : 'env')}
+ className={`relative inline-flex h-6 w-11 items-center rounded-full ${configType === 'swarm' ? 'bg-indigo-600' : 'bg-gray-200'}`}
+ >
+
+
+ Swarm
+
+
+
-
{
- navigator.clipboard.writeText(envContent);
- toast.success('Configuration content copied to clipboard!');
- }}
- className="mt-3 w-full flex justify-center py-2 px-4 border border-transparent rounded-md text-sm font-medium text-white bg-green-600 hover:bg-green-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-green-500"
- >
- Copy Configuration Content
-
-
-
- Important: Copy this configuration content and save it to ./config/.env, then run docker-compose down && docker-compose up -d to apply the configuration.
-
-
+
+ {configType === 'env' ? (
+ <>
+
{
+ navigator.clipboard.writeText(envContent);
+ toast.success('.env content copied to clipboard!');
+ }}
+ className="mt-3 w-full flex justify-center py-2 px-4 border border-transparent rounded-md text-sm font-medium text-white bg-green-600 hover:bg-green-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-green-500"
+ >
+ Copy .env Content
+
+
+
+ Next Steps: Save this content to config/.env and run docker compose down && docker compose up -d to apply the configuration.
+
+
+
+
+ Security Note: The config/.env file contains sensitive credentials. Ensure it has restricted permissions (chmod 600) and is excluded from version control.
+
+
+ >
+ ) : (
+ <>
+
{
+ navigator.clipboard.writeText(envContent);
+ toast.success('Docker secret commands copied to clipboard!');
+ }}
+ className="mt-3 w-full flex justify-center py-2 px-4 border border-transparent rounded-md text-sm font-medium text-white bg-green-600 hover:bg-green-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-green-500"
+ >
+ Copy Docker Secret Commands
+
+
+
+ Requirements: Docker Swarm mode is required. Run docker swarm init on your Docker host before creating secrets.
+
+
+
+
+ Next Steps: Run the copied commands on your Docker host, then update docker-compose.yml to mount the secrets and restart.
+
+
+ >
+ )}
)}
-
+
{/* Next Steps */}
Next Steps
-
- Copy the configuration content using the green button above
- Save it to ./config/.env
- Run docker-compose down && docker-compose up -d
- Login to the dashboard with your admin username and password
-
+ {configType === 'env' ? (
+
+ Copy the .env content using the green button above
+ Save it to config/.env
+ Run docker compose down && docker compose up -d
+ Login to the dashboard with your admin username and password
+
+ ) : (
+
+ Initialize Docker Swarm: docker swarm init
+ Copy the Docker secret commands using the green button above
+ Run the commands on your Docker host to create the secrets
+ Update docker-compose.yml to mount the secrets
+ Restart RedFlag with docker compose down && docker compose up -d
+ Login to the dashboard with your admin username and password
+
+ )}
diff --git a/aggregator-web/src/pages/TokenManagement.tsx b/aggregator-web/src/pages/TokenManagement.tsx
index d50743a..fd1f9a6 100644
--- a/aggregator-web/src/pages/TokenManagement.tsx
+++ b/aggregator-web/src/pages/TokenManagement.tsx
@@ -110,13 +110,13 @@ const TokenManagement: React.FC = () => {
const copyInstallCommand = async (token: string) => {
const serverUrl = getServerUrl();
- const command = `curl -sfL ${serverUrl}/api/v1/install/linux | bash -s -- ${token}`;
+ const command = `curl -sfL "${serverUrl}/api/v1/install/linux?token=${token}" | sudo bash`;
await navigator.clipboard.writeText(command);
};
const generateInstallCommand = (token: string) => {
const serverUrl = getServerUrl();
- return `curl -sfL ${serverUrl}/api/v1/install/linux | bash -s -- ${token}`;
+ return `curl -sfL "${serverUrl}/api/v1/install/linux?token=${token}" | sudo bash`;
};
const getStatusColor = (token: RegistrationToken) => {
diff --git a/aggregator-web/src/pages/settings/AgentManagement.tsx b/aggregator-web/src/pages/settings/AgentManagement.tsx
index b299328..8749786 100644
--- a/aggregator-web/src/pages/settings/AgentManagement.tsx
+++ b/aggregator-web/src/pages/settings/AgentManagement.tsx
@@ -13,7 +13,8 @@ import {
RefreshCw,
Code,
FileText,
- Package
+ Package,
+ Key
} from 'lucide-react';
import { useRegistrationTokens } from '@/hooks/useRegistrationTokens';
import { toast } from 'react-hot-toast';
@@ -73,15 +74,15 @@ const AgentManagement: React.FC = () => {
if (platform.id === 'linux') {
if (token !== 'YOUR_REGISTRATION_TOKEN') {
- return `curl -sfL ${serverUrl}${platform.installScript} | sudo bash -s -- ${token}`;
+ return `curl -sfL "${serverUrl}${platform.installScript}?token=${token}" | sudo bash`;
} else {
- return `curl -sfL ${serverUrl}${platform.installScript} | sudo bash`;
+ return `curl -sfL "${serverUrl}${platform.installScript}" | sudo bash`;
}
} else if (platform.id === 'windows') {
if (token !== 'YOUR_REGISTRATION_TOKEN') {
- return `iwr ${serverUrl}${platform.installScript} -OutFile install.bat; .\\install.bat ${token}`;
+ return `iwr "${serverUrl}${platform.installScript}?token=${token}" -OutFile install.bat; .\\install.bat`;
} else {
- return `iwr ${serverUrl}${platform.installScript} -OutFile install.bat; .\\install.bat`;
+ return `iwr "${serverUrl}${platform.installScript}" -OutFile install.bat; .\\install.bat`;
}
}
return '';
@@ -93,15 +94,15 @@ const AgentManagement: React.FC = () => {
if (platform.id === 'windows') {
if (token !== 'YOUR_REGISTRATION_TOKEN') {
- return `# Download and run as Administrator with token\niwr ${serverUrl}${platform.installScript} -OutFile install.bat\n.\\install.bat ${token}`;
+ return `# Download and run as Administrator with token\niwr "${serverUrl}${platform.installScript}?token=${token}" -OutFile install.bat\n.\\install.bat`;
} else {
- return `# Download and run as Administrator\niwr ${serverUrl}${platform.installScript} -OutFile install.bat\n.\\install.bat`;
+ return `# Download and run as Administrator\niwr "${serverUrl}${platform.installScript}" -OutFile install.bat\n.\\install.bat`;
}
} else {
if (token !== 'YOUR_REGISTRATION_TOKEN') {
- return `# Download and run as root with token\ncurl -sfL ${serverUrl}${platform.installScript} | sudo bash -s -- ${token}`;
+ return `# Download and run as root with token\ncurl -sfL "${serverUrl}${platform.installScript}?token=${token}" | sudo bash`;
} else {
- return `# Download and run as root\ncurl -sfL ${serverUrl}${platform.installScript} | sudo bash`;
+ return `# Download and run as root\ncurl -sfL "${serverUrl}${platform.installScript}" | sudo bash`;
}
}
};
diff --git a/aggregator-web/src/types/security.ts b/aggregator-web/src/types/security.ts
new file mode 100644
index 0000000..74ab1ca
--- /dev/null
+++ b/aggregator-web/src/types/security.ts
@@ -0,0 +1,314 @@
+// Security Settings Types for RedFlag
+
+export interface SecuritySettings {
+ command_signing: CommandSigningSettings;
+ update_security: UpdateSecuritySettings;
+ machine_binding: MachineBindingSettings;
+ logging: LoggingSettings;
+ key_management: KeyManagementSettings;
+}
+
+export interface CommandSigningSettings {
+ enabled: boolean;
+ enforcement_mode: 'strict' | 'warning' | 'disabled';
+ algorithm: 'ed25519' | 'rsa' | 'ecdsa';
+ key_id?: string;
+}
+
+export interface UpdateSecuritySettings {
+ enabled: boolean;
+ enforcement_mode: 'strict' | 'warning' | 'disabled';
+ nonce_timeout_seconds: number;
+ require_signature_verification: boolean;
+ allowed_algorithms: string[];
+}
+
+export interface MachineBindingSettings {
+ enabled: boolean;
+ enforcement_mode: 'strict' | 'warning' | 'disabled';
+ binding_components: {
+ hardware_id: boolean;
+ bios_uuid: boolean;
+ mac_addresses: boolean;
+ cpu_id: boolean;
+ disk_serial: boolean;
+ };
+ violation_action: 'block' | 'warn' | 'log_only';
+ binding_grace_period_minutes: number;
+}
+
+export interface LoggingSettings {
+ log_level: 'debug' | 'info' | 'warn' | 'error';
+ retention_days: number;
+ log_failures: boolean;
+ log_successes: boolean;
+ log_to_file: boolean;
+ log_to_console: boolean;
+ export_format: 'json' | 'csv' | 'syslog';
+}
+
+export interface KeyManagementSettings {
+ current_key: {
+ key_id: string;
+ algorithm: string;
+ created_at: string;
+ expires_at?: string;
+ fingerprint: string;
+ };
+ auto_rotation: boolean;
+ rotation_interval_days: number;
+ grace_period_days: number;
+ key_history: KeyHistoryEntry[];
+}
+
+export interface KeyHistoryEntry {
+ key_id: string;
+ algorithm: string;
+ created_at: string;
+ retired_at: string;
+ reason: 'rotation' | 'compromise' | 'manual';
+}
+
+// UI Component Types
+export type SecuritySettingType =
+ | 'toggle'
+ | 'select'
+ | 'number'
+ | 'text'
+ | 'json'
+ | 'slider'
+ | 'checkbox-group';
+
+export interface SecuritySetting {
+ key: string;
+ label: string;
+ type: SecuritySettingType;
+ value: any;
+ default?: any;
+ description?: string;
+ options?: string[];
+ min?: number;
+ max?: number;
+ step?: number;
+ validation?: (value: any) => string | null;
+ disabled?: boolean;
+ sensitive?: boolean;
+}
+
+export interface SecurityCategorySectionProps {
+ title: string;
+ description: string;
+ settings: SecuritySetting[];
+ onSettingChange: (key: string, value: any) => void;
+ disabled?: boolean;
+ loading?: boolean;
+ error?: string | null;
+}
+
+export interface SecuritySettingProps {
+ setting: SecuritySetting;
+ onChange: (value: any) => void;
+ disabled?: boolean;
+ error?: string | null;
+}
+
+export interface SecurityStatus {
+ overall: 'healthy' | 'warning' | 'critical';
+ features: SecurityFeatureStatus[];
+ recent_events: number;
+ last_updated: string;
+}
+
+export interface SecurityFeatureStatus {
+ name: string;
+ enabled: boolean;
+ status: 'healthy' | 'warning' | 'error';
+ last_check: string;
+ details?: string;
+}
+
+export interface SecurityStatusCardProps {
+ status: SecurityStatus;
+ onRefresh?: () => void;
+ loading?: boolean;
+}
+
+// Security Events and Audit Trail
+export interface SecurityEvent {
+ id: string;
+ timestamp: string;
+ severity: 'info' | 'warn' | 'error' | 'critical';
+ category: 'command_signing' | 'update_security' | 'machine_binding' | 'key_management' | 'authentication';
+ event_type: string;
+ agent_id?: string;
+ user_id?: string;
+ message: string;
+ details: Record<string, any>;
+ trace_id?: string;
+ correlation_id?: string;
+}
+
+export interface AuditEntry {
+ id: string;
+ timestamp: string;
+ user_id: string;
+ user_name: string;
+ action: string;
+ category: string;
+ setting_key: string;
+ old_value: any;
+ new_value: any;
+ ip_address: string;
+ user_agent: string;
+ reason?: string;
+}
+
+export interface SecurityEventsState {
+ events: SecurityEvent[];
+ loading: boolean;
+ error: string | null;
+ filters: EventFilters;
+ pagination: {
+ page: number;
+ pageSize: number;
+ total: number;
+ hasMore: boolean;
+ };
+ liveUpdates: boolean;
+}
+
+export interface EventFilters {
+ severity?: string[];
+ category?: string[];
+ date_range?: {
+ start: string;
+ end: string;
+ };
+ agent_id?: string;
+ user_id?: string;
+ search?: string;
+}
+
+export interface SecurityEventsProps {
+ events: SecurityEvent[];
+ loading?: boolean;
+ error?: string | null;
+ filters: EventFilters;
+ onFiltersChange: (filters: EventFilters) => void;
+ onEventSelect?: (event: SecurityEvent) => void;
+ onExport?: (format: 'json' | 'csv') => void;
+ pagination?: any;
+ liveUpdates?: boolean;
+ onToggleLiveUpdates?: () => void;
+}
+
+// Confirmation Dialog Types
+export interface ConfirmationDialogState {
+ isOpen: boolean;
+ title: string;
+ message: string;
+ details?: string;
+ severity: 'warning' | 'danger';
+ requiresConfirmation: boolean;
+ confirmationText?: string;
+ onConfirm: () => void;
+ onCancel: () => void;
+}
+
+// Security Settings State Management
+export interface SecuritySettingsState {
+ settings: SecuritySettings | null;
+ loading: boolean;
+ saving: boolean;
+ error: string | null;
+ errors: Record<string, string>;
+ hasChanges: boolean;
+ validationStatus: 'valid' | 'invalid' | 'pending';
+ lastSaved: string | null;
+}
+
+// API Response Types
+export interface SecuritySettingsResponse {
+ settings: SecuritySettings;
+ updated_at: string;
+ version: string;
+}
+
+export interface SecurityAuditResponse {
+ audit_entries: AuditEntry[];
+ total: number;
+ page: number;
+ page_size: number;
+}
+
+export interface SecurityEventsResponse {
+ events: SecurityEvent[];
+ total: number;
+ page: number;
+ page_size: number;
+ has_more: boolean;
+}
+
+// Validation Rules
+export interface ValidationRule {
+ pattern?: RegExp;
+ min?: number;
+ max?: number;
+ required?: boolean;
+ custom?: (value: any) => string | null;
+}
+
+export interface SecurityValidationRules {
+ [key: string]: ValidationRule;
+}
+
+// WebSocket Types for Real-time Updates
+export interface SecurityWebSocketMessage {
+ type: 'security_event' | 'setting_changed' | 'status_updated';
+ data: any;
+ timestamp: string;
+}
+
+// Export and Import Types
+export interface SecurityExport {
+ timestamp: string;
+ version: string;
+ settings: SecuritySettings;
+ audit_trail: AuditEntry[];
+ metadata: {
+ exported_by: string;
+ export_reason: string;
+ };
+}
+
+// Machine Binding Detail Types
+export interface MachineFingerprint {
+ hardware_id: string;
+ bios_uuid: string;
+ mac_addresses: string[];
+ cpu_id: string;
+ disk_serials: string[];
+ hostname: string;
+ os_info: {
+ platform: string;
+ version: string;
+ architecture: string;
+ };
+ generated_at: string;
+ fingerprint_hash: string;
+}
+
+// Key Rotation Types
+export interface KeyRotationRequest {
+ reason: 'scheduled' | 'compromise' | 'manual';
+ grace_period_days?: number;
+ immediate?: boolean;
+}
+
+export interface KeyRotationResponse {
+ new_key_id: string;
+ old_key_id: string;
+ grace_period_ends: string;
+ rotation_complete: boolean;
+ affected_agents: number;
+}
\ No newline at end of file
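The `ValidationRule` shape defined in security.ts above drives per-setting validation. A hedged sketch of how a consumer might evaluate a single rule (the `validateValue` helper is illustrative, not part of the codebase):

```typescript
// Evaluate one ValidationRule against a value, returning an error
// message or null, mirroring the contract of the `custom` callback.
interface ValidationRule {
  pattern?: RegExp;
  min?: number;
  max?: number;
  required?: boolean;
  custom?: (value: any) => string | null;
}

function validateValue(value: any, rule: ValidationRule): string | null {
  if (rule.required && (value === undefined || value === null || value === '')) {
    return 'value is required';
  }
  if (typeof value === 'number') {
    if (rule.min !== undefined && value < rule.min) return `must be at least ${rule.min}`;
    if (rule.max !== undefined && value > rule.max) return `must be at most ${rule.max}`;
  }
  if (rule.pattern && typeof value === 'string' && !rule.pattern.test(value)) {
    return 'value does not match the expected format';
  }
  return rule.custom ? rule.custom(value) : null;
}

// e.g. the retention_days bounds from the logging settings (1-365 days)
const retentionRule: ValidationRule = { required: true, min: 1, max: 365 };
```

Ranges here mirror the `min`/`max` values used in the settings arrays (e.g. `retention_days` is 1-365).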
diff --git a/aggregator/go.mod b/aggregator/go.mod
deleted file mode 100644
index c020602..0000000
--- a/aggregator/go.mod
+++ /dev/null
@@ -1,3 +0,0 @@
-module github.com/Fimeg/RedFlag/aggregator
-
-go 1.23.0
diff --git a/docker-compose.yml b/docker-compose.yml
index 2c5af43..57b9269 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,11 +2,6 @@ services:
postgres:
image: postgres:16-alpine
container_name: redflag-postgres
- environment:
- POSTGRES_DB: redflag
- POSTGRES_USER: redflag
- POSTGRES_PASSWORD: redflag_bootstrap
- POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
volumes:
- postgres-data:/var/lib/postgresql/data
- ./config/.env:/shared/.env
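The Docker secret commands generated in Setup.tsx above pipe values with `printf '%s'` rather than `echo`. That choice matters: `echo` appends a trailing newline, which would silently become part of the stored secret. A quick sanity check, runnable on any POSIX shell without Docker:

```shell
# printf '%s' emits the value byte-for-byte with no trailing newline;
# echo adds one, so the byte counts differ by exactly one.
plain=$(printf '%s' 'hunter2' | wc -c)
echoed=$(echo 'hunter2' | wc -c)
echo "printf bytes: $plain, echo bytes: $echoed"
```

One remaining caveat in the generated commands: values are wrapped in single quotes, so a secret containing a literal `'` would still need manual escaping before running them.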