Tactical Messaging — Architecture

This document captures the architecture and design elaboration for the Tactical Messaging sample, produced through ASE.

Architecture Definition

Project: TacticalMessaging
Stage: Architecture Discovery
Status: Draft for review
Aligned to: system_intent.problem_brief.draft.003

1. Architectural Framing

TacticalMessaging is a reference example built on the BogDB NuGet package to demonstrate how a graph database and Cypher-style queries can model tactical messaging compliance, traceability, and review evidence.

The architecture is therefore optimized for:

  • a clear and credible graph domain model;
  • executable named Cypher queries against seeded sample data;
  • enough workflow surface to show request, review, decision, and audit behavior;
  • enough administration and reporting surface to demonstrate governance concepts;
  • strong traceability from requirement to component to test to policy to decision.

The architecture is not optimized for:

  • production-grade deployment;
  • live messaging transport;
  • enterprise identity integration by default;
  • broad standards coverage;
  • replacing formal program systems.

2. Architecture Goals

Primary goals

  • Demonstrate BogDB as a suitable graph store for compliance and traceability problems.
  • Make relationships first-class and queryable rather than embedded in document text.
  • Show realistic end-to-end answers to questions about requirements, policies, tests, and approvals.
  • Preserve an auditable chain of evidence for modeled review decisions.
  • Keep the sample runnable and understandable for .NET developers.

Quality priorities

  1. Clarity of model
  2. Query expressiveness
  3. Traceability fidelity
  4. Auditability
  5. Ease of local execution
  6. Illustrative workflow realism

3. Scope Boundary

In scope

  • Seeded graph data for tactical messaging compliance examples
  • Domain entities such as requirements, message types, vocabulary terms, translator components, test cases, policies, reviews, and decisions
  • Query library with documented Cypher examples
  • Example workflow for request submission, review, policy assessment, and decision capture
  • Simple administration capabilities for governed vocabularies and policy rules
  • Reporting views or outputs derived from Cypher queries

Out of scope

  • Operational tactical message transport
  • Classified data handling
  • Full implementation of Link 16 or VMF specifications
  • Production web platform concerns
  • Integration-heavy enterprise workflow orchestration
  • Formal accreditation or authority-to-operate support

4. Architectural Style

The recommended style is a modular sample application with a graph-centric core.

The application separates into three layers, carried forward from the intent stage:

  1. Graph model layer — the durable domain nodes and relationships in BogDB
  2. Workflow surface layer — example-grade actions for intake, review, approval, and administration
  3. Query and reporting layer — named Cypher queries and optional simple report views

This should be implemented as a single deployable example solution with clear internal module boundaries rather than as distributed services.

5. High-Level Subsystems

The following Mermaid flowchart shows the major subsystems and how they interact.

flowchart LR
    Users[Example Users]
    Sources[Seed Source Files]

    subgraph App[Sample Application]
        Intake[Intake and Review Surface]
        Admin[Admin Surface]
        Reports[Reporting and Query Surface]
        Domain[Domain Graph Services]
        Policy[Policy Evaluation]
        Seeds[Seed and Import Loader]
        Queries[Named Cypher Query Library]
    end

    Graph[(BogDB Graph Store)]

    Users --> Intake
    Users --> Admin
    Users --> Reports
    Sources --> Seeds
    Seeds --> Domain
    Intake --> Domain
    Admin --> Domain
    Reports --> Queries
    Domain --> Graph
    Policy --> Graph
    Queries --> Graph
    Domain --> Policy

6. Subsystem Responsibilities

6.1 Seed and Import Loader

Responsible for creating the example dataset used by the sample.

Responsibilities

  • Load hand-curated example data into BogDB
  • Create stable identifiers for requirements and related records
  • Seed illustrative standards fragments, policies, components, tests, and decisions
  • Support resetting the graph to a known example state

Notes

  • Seeded data is the default expected mode.
  • External imports, if added later, should remain optional and secondary.
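
As a sketch of the reset-to-known-state behavior, assuming BogDB accepts parameterized Cypher through some session-like object; the IGraphSession interface, its RunAsync method, and all identifiers below are illustrative placeholders rather than the actual package API:

```csharp
using System.Threading.Tasks;

// Hypothetical abstraction over the BogDB client; the interface and method
// names are placeholders, not the actual package API.
public interface IGraphSession
{
    Task RunAsync(string cypher, object? parameters = null);
}

public sealed class SeedLoader
{
    private readonly IGraphSession _session;

    public SeedLoader(IGraphSession session) => _session = session;

    // Resets the graph to the known example state described above.
    public async Task ResetToExampleStateAsync()
    {
        // Clear any previous example data so reruns are deterministic.
        await _session.RunAsync("MATCH (n) DETACH DELETE n");

        // Seed one requirement, its standard source, and an example actor,
        // all with stable identifiers so queries stay reproducible.
        await _session.RunAsync(
            "CREATE (s:StandardSource {id: $sourceId, name: $sourceName}) " +
            "CREATE (r:Requirement {id: $reqId, title: $title}) " +
            "CREATE (r)-[:DERIVED_FROM]->(s) " +
            "CREATE (:Actor {id: $actorId, name: $actorName})",
            new
            {
                sourceId = "SRC-L16-001",
                sourceName = "Illustrative Link 16 naming fragment",
                reqId = "REQ-001",
                title = "Message names follow the Link 16 naming convention",
                actorId = "ACT-ANALYST",
                actorName = "Example analyst"
            });
    }
}
```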

6.2 Domain Graph Services

Application-facing logic for creating and relating domain objects.

Responsibilities

  • Create and update graph nodes and relationships
  • Enforce basic structural invariants
  • Encapsulate graph write patterns for workflow actions
  • Maintain audit metadata such as actor, time, rationale, and source reference

Examples

  • Create review request
  • Link translator component to requirement
  • Attach verification evidence to requirement
  • Record approval or rejection decision
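
A sketch of one such write pattern, creating a review request with its audit metadata and actor link; it reuses the placeholder IGraphSession from the seed loader sketch, and the property names are assumptions rather than a fixed schema:

```csharp
using System;
using System.Threading.Tasks;

public sealed class ReviewRequestService
{
    private readonly IGraphSession _session; // placeholder abstraction from the seed loader sketch

    public ReviewRequestService(IGraphSession session) => _session = session;

    // Creates a ReviewRequest node, links it to the submitting Actor,
    // and stamps actor, time, and rationale as audit metadata.
    public Task CreateReviewRequestAsync(string requestId, string actorId, string rationale)
    {
        const string cypher =
            "MATCH (a:Actor {id: $actorId}) " +
            "CREATE (rr:ReviewRequest {id: $requestId, submittedAt: $submittedAt, rationale: $rationale}) " +
            "CREATE (rr)-[:SUBMITTED_BY]->(a)";

        return _session.RunAsync(cypher, new
        {
            requestId,
            actorId,
            submittedAt = DateTimeOffset.UtcNow.ToString("o"),
            rationale
        });
    }
}
```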

6.3 Policy Evaluation

Supports validation of source-derived rules as first-class architecture behavior.

Responsibilities

  • Express policy checks over graph data
  • Evaluate naming and vocabulary constraints
  • Produce violation records or queryable findings
  • Support on-demand validation at minimum

Representative policy categories

  • Link 16 message naming format
  • VMF vocabulary restrictions
  • Requirement identifier convention checks
  • Completeness checks such as requirement without verification evidence

Decision guidance

  • For the reference example, on-demand evaluation via named queries is sufficient.
  • Write-time validation may be added if it improves the teaching value without overcomplicating the sample.
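
As a sketch of the query-first stance, the completeness category above (requirement without verification evidence) can be expressed as a single on-demand Cypher check; the IGraphQueryRunner abstraction and its result shape are assumptions, not part of the BogDB package:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical read-side counterpart to the session abstraction; names are illustrative.
public interface IGraphQueryRunner
{
    Task<IReadOnlyList<IDictionary<string, object>>> QueryAsync(string cypher, object? parameters = null);
}

public sealed class PolicyEvaluator
{
    private readonly IGraphQueryRunner _runner;

    public PolicyEvaluator(IGraphQueryRunner runner) => _runner = runner;

    // On-demand completeness check: requirements with no VERIFIED_BY link to a test case.
    public Task<IReadOnlyList<IDictionary<string, object>>> FindUnverifiedRequirementsAsync()
    {
        const string cypher =
            "MATCH (r:Requirement) " +
            "WHERE NOT (r)-[:VERIFIED_BY]->(:TestCase) " +
            "RETURN r.id AS requirementId, r.title AS title";

        return _runner.QueryAsync(cypher);
    }
}
```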

6.4 Named Cypher Query Library

A first-class subsystem, not just scattered examples.

Responsibilities

  • Provide curated, documented Cypher queries for common traceability and compliance questions
  • Give each query a stable name, purpose, inputs, and expected output shape
  • Serve both developer learning and reporting surfaces

Minimum expected query families

  • Requirement to component traceability
  • Requirement to test case traceability
  • Policy to affected nodes
  • Requirement decision and approval history
  • Open findings or compliance gaps
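
One way to treat named queries as first-class artifacts is a small record-based catalog carrying name, purpose, Cypher text, and expected output columns; the two entries below are invented illustrations of the query families above, not the actual library contents:

```csharp
using System.Collections.Generic;

// A named query as a first-class artifact: stable name, purpose, Cypher text, output shape.
public sealed record NamedQuery(
    string Name,
    string Purpose,
    string Cypher,
    IReadOnlyList<string> OutputColumns);

public static class QueryCatalog
{
    public static readonly NamedQuery RequirementToTestTrace = new(
        Name: "requirement-to-test-trace",
        Purpose: "Show which test cases verify a given requirement.",
        Cypher:
            "MATCH (r:Requirement {id: $requirementId})-[:VERIFIED_BY]->(t:TestCase) " +
            "RETURN r.id AS requirementId, t.id AS testCaseId, t.name AS testCaseName",
        OutputColumns: new[] { "requirementId", "testCaseId", "testCaseName" });

    public static readonly NamedQuery PolicyToAffectedRequirements = new(
        Name: "policy-to-affected-requirements",
        Purpose: "List findings with the policy they identify and the requirement they affect.",
        Cypher:
            "MATCH (f:Finding)-[:IDENTIFIES]->(p:Policy), (f)-[:AFFECTS]->(r:Requirement) " +
            "RETURN p.id AS policyId, f.id AS findingId, r.id AS requirementId",
        OutputColumns: new[] { "policyId", "findingId", "requirementId" });
}
```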

6.5 Intake and Review Surface

An example-grade workflow interface, which could be CLI, minimal web UI, or both.

Responsibilities

  • Capture a review request or submitted artifact package
  • Assign or record reviewer actions
  • Capture review comments, findings, and decisions
  • Show traceability and evidence for a selected item

Architectural constraint

  • This surface exists to demonstrate graph-backed workflow and audit semantics, not to be a rich operational case management product.

6.6 Admin Surface

Supports governance concepts needed to make the example credible.

Responsibilities

  • Manage policies and vocabularies
  • Inspect seed data and graph state
  • Trigger reseed or validation runs
  • Review domain metadata and allowed value sets

6.7 Reporting and Query Surface

Makes the graph legible to reviewers and developers.

Responsibilities

  • Execute named queries
  • Render tabular or textual outputs
  • Provide canned reports for common questions
  • Show linked evidence chains

7. Core Domain Concepts

The graph should model entities and relationships as first-class structures.

Core entities

  • Requirement
  • TranslatorComponent
  • TestCase
  • Policy
  • MessageType
  • VocabularyTerm
  • ReviewRequest
  • Review
  • Decision
  • Finding
  • EvidenceArtifact
  • StandardSource
  • Actor

Core relationships

  • TranslatorComponent -SATISFIES-> Requirement
  • Requirement -VERIFIED_BY-> TestCase
  • Policy -APPLIES_TO-> MessageType
  • Policy -GOVERNS-> VocabularyTerm
  • ReviewRequest -SUBMITTED_BY-> Actor
  • Review -REVIEWS-> ReviewRequest
  • Decision -DECIDES-> ReviewRequest
  • Decision -MADE_BY-> Actor
  • Decision -SUPPORTED_BY-> EvidenceArtifact
  • Requirement -DERIVED_FROM-> StandardSource
  • Finding -IDENTIFIES-> Policy
  • Finding -AFFECTS-> Requirement
  • Finding -AFFECTS-> MessageType

Domain modeling guidance

  • Reviews and decisions should remain distinct where useful.
  • Audit metadata may be represented on nodes, on relationships, or on both, depending on BogDB capabilities and readability.
  • Stable identifiers are required for major node types so queries and examples are reproducible.
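
As a short illustration of the stable-identifier guidance, seed writes can MERGE on identifiers so reruns remain reproducible; the fragment below wraps Cypher in a C# constant and uses invented example identifiers with the relationship names listed above:

```csharp
public static class ExampleSeedCypher
{
    // MERGE on stable ids keeps the seed idempotent, per the modeling guidance above.
    public const string LinkComponentAndTest =
        "MERGE (r:Requirement {id: 'REQ-001'}) " +
        "MERGE (c:TranslatorComponent {id: 'CMP-LINK16-NAMING'}) " +
        "MERGE (t:TestCase {id: 'TC-001'}) " +
        "MERGE (c)-[:SATISFIES]->(r) " +
        "MERGE (r)-[:VERIFIED_BY]->(t)";
}
```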

8. Representative Architecture Flow

The following Mermaid sequence diagram illustrates the core example behavior from submission through traceability and decision capture.

sequenceDiagram
    actor Submitter
    actor Reviewer
    participant Surface as Review Surface
    participant Domain as Domain Graph Services
    participant Policy as Policy Evaluation
    participant BogDB as BogDB Graph Store
    participant Query as Named Query Library

    Submitter->>Surface: Submit review request and artifacts
    Surface->>Domain: Create review request and evidence links
    Domain->>BogDB: Persist nodes and relationships
    Reviewer->>Surface: Open request
    Surface->>Query: Run requirement trace query
    Query->>BogDB: Match requirement component test links
    BogDB-->>Query: Traceability subgraph
    Query-->>Surface: Traceability result
    Surface->>Policy: Run policy checks
    Policy->>BogDB: Evaluate rule queries
    BogDB-->>Policy: Violations or pass results
    Policy-->>Surface: Findings
    Reviewer->>Surface: Record decision rationale
    Surface->>Domain: Persist decision and audit metadata
    Domain->>BogDB: Write decision evidence and actor links

9. Data and Storage Boundaries

System of record

BogDB is the authoritative store for the example graph.

Stored content categories

  • Seeded standards fragments and sample policies
  • Requirements and requirement identifiers
  • Components and satisfaction relationships
  • Test cases and verification relationships
  • Review requests, reviews, findings, and decisions
  • Actors and audit metadata
  • Named query definitions or references to them

Storage principles

  • Relationships should be explicit and navigable.
  • Source-derived rules should be represented in a queryable form.
  • Example data should be deterministic and resettable.
  • Audit records should be append-oriented where possible to preserve history.
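
A sketch of the append-oriented principle applied to decisions: each new decision becomes a new node linked to its request and actor, rather than an update to an earlier one, so history stays queryable. The relationship names follow section 7; the property names are assumptions:

```csharp
public static class DecisionCypher
{
    // Append a new Decision node instead of mutating an existing one;
    // DECIDES and MADE_BY follow section 7, the property names are illustrative.
    public const string AppendDecision =
        "MATCH (rr:ReviewRequest {id: $requestId}), (a:Actor {id: $actorId}) " +
        "CREATE (d:Decision {id: $decisionId, outcome: $outcome, rationale: $rationale, madeAt: $madeAt}) " +
        "CREATE (d)-[:DECIDES]->(rr) " +
        "CREATE (d)-[:MADE_BY]->(a)";
}
```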

10. Integration Posture

Because this is a reference example, the architecture should assume minimal external integration.

Required integrations

  • BogDB via NuGet package

Optional illustrative integrations

  • File-based seed import from JSON, YAML, or markdown-derived fixtures
  • Export of report output to markdown or CSV

Deferred integrations

  • Enterprise identity providers
  • Requirements repository synchronization
  • Test management systems
  • Document management systems
  • AI services

If any of these are later added, they should sit at the edge of the architecture and not reshape the graph-centric core.

11. Delivery Shape Guidance

Based on the updated intent, the preferred delivery shape is:

Library plus runnable sample application

This best balances developer learnability and architecture clarity:

  • a reusable domain and query layer in .NET;
  • a runnable sample host that seeds data and executes example flows;
  • optionally a minimal UI if it helps communicate the workflow.
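
A sketch of what the runnable sample host could look like under this shape, wiring together the placeholder abstractions from the earlier sketches; none of these types come from the BogDB package, and the two Create* factory methods stand in for whatever the package actually requires to open a store:

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // Composition root for the sample host; the factories below are placeholders.
        IGraphSession session = CreateSession();
        IGraphQueryRunner runner = CreateRunner();

        // 1. Reset and seed the example graph, then add the traceability links.
        await new SeedLoader(session).ResetToExampleStateAsync();
        await session.RunAsync(ExampleSeedCypher.LinkComponentAndTest);

        // 2. Create one review request so later queries have workflow state to show.
        await new ReviewRequestService(session)
            .CreateReviewRequestAsync("RR-001", "ACT-ANALYST", "Initial compliance review");

        // 3. Execute a named query and print its rows.
        var rows = await runner.QueryAsync(
            QueryCatalog.RequirementToTestTrace.Cypher,
            new { requirementId = "REQ-001" });
        foreach (var row in rows)
            Console.WriteLine(string.Join(" | ", row.Values));
    }

    // Placeholders only: real wiring depends on the actual BogDB API.
    private static IGraphSession CreateSession() => throw new NotImplementedException();
    private static IGraphQueryRunner CreateRunner() => throw new NotImplementedException();
}
```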

Acceptable variants

  • Console or CLI sample if the goal is maximizing simplicity
  • Minimal web UI if reviewers need visual workflow and reporting navigation

Less preferred for current intent

  • Rich multi-user web application as the primary delivery form

12. Cross-Cutting Concerns

Auditability

  • Every review and decision path should capture actor, timestamp, and rationale.
  • Evidence links should be queryable.
  • History should favor append over overwrite for decisions and findings.

Traceability

  • Requirement lineage must remain easy to traverse.
  • Queries should demonstrate traversal across standards, requirements, components, tests, and decisions.

Explainability

  • Query outputs should be understandable by humans, not only technically valid.
  • Named queries should include purpose and interpretation notes.

Security

  • Default assumption is local or limited-access sample usage.
  • Role distinctions may exist in the domain model without requiring full authentication infrastructure.

Testability

  • Seed data should support deterministic example scenarios.
  • Architecture should support repeatable query result verification.
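
A minimal sketch of repeatable query verification over the deterministic seed, written in xUnit style; the TestGraph helper and all identifiers are assumptions carried over from the earlier sketches:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Xunit;

public sealed class TraceQueryTests
{
    [Fact]
    public async Task Seeded_requirement_is_verified_by_at_least_one_test_case()
    {
        // Arrange: reset to the known example state and add the trace links.
        IGraphSession session = TestGraph.CreateSession();
        IGraphQueryRunner runner = TestGraph.CreateRunner();
        await new SeedLoader(session).ResetToExampleStateAsync();
        await session.RunAsync(ExampleSeedCypher.LinkComponentAndTest);

        // Act: run the named traceability query against the seed.
        var rows = await runner.QueryAsync(
            QueryCatalog.RequirementToTestTrace.Cypher,
            new { requirementId = "REQ-001" });

        // Assert: deterministic seed data should yield a stable, non-empty result.
        Assert.True(rows.Any(), "Seed should link REQ-001 to at least one test case.");
    }
}

// Placeholder wiring; replace with whatever the BogDB package provides.
internal static class TestGraph
{
    public static IGraphSession CreateSession() => throw new System.NotImplementedException();
    public static IGraphQueryRunner CreateRunner() => throw new System.NotImplementedException();
}
```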

13. Architectural Decisions and Direction

Decision AD 1

Graph database is the architectural center.
All major business concepts and trace relationships are represented in BogDB as first-class nodes and edges.

Decision AD 2

Named Cypher queries are a product feature of the example.
The query library is a primary architectural deliverable, not an implementation detail.

Decision AD 3

Workflow surfaces are illustrative and thin.
They exist to create and inspect graph state and audit trails, not to be comprehensive case management.

Decision AD 4

Seeded sample data is the default source.
The example should be runnable out of the box without dependency on external systems.

Decision AD 5

Policy validation is query-first.
On-demand policy evaluation through Cypher queries is the minimum architectural commitment.

14. Risks and Architecture Watchouts

Risk 1: Product creep

The architecture may drift toward a full compliance platform.

Mitigation

  • Keep optional UI and auth lightweight
  • Prefer demonstration value over feature breadth
  • Evaluate additions against the teaching goal

Risk 2: Overly sparse domain model

A too-minimal graph may fail to convince compliance-minded reviewers.

Mitigation

  • Include realistic relationships and at least one violation scenario
  • Preserve review decision and evidence semantics

Risk 3: Query examples divorced from workflow

If queries do not connect to visible actions, the sample may feel artificial.

Mitigation

  • Ensure at least one end-to-end flow creates state later queried by reports

Risk 4: BogDB capability mismatch

Some desired semantics may not map cleanly if assumptions about graph features are wrong.

Mitigation

  • Validate early against actual BogDB API and query capabilities
  • Favor simple, portable graph patterns

15. Open Architectural Decisions

  1. Anchor workflow — Which single scenario should be the primary narrative: compliance review request, translator certification support, or change request intake?
  2. Primary host — Should the runnable sample be CLI-first, minimal web-first, or dual-surface?
  3. Policy execution mode — On-demand only, or also during write operations?
  4. Seed data depth — How much Link 16 and VMF sample content is enough to feel credible without overwhelming the example?
  5. Report format — Are named query outputs alone sufficient, or is a simple dashboard needed for stakeholder readability?

16. Next Steps

Use this architecture to drive the next-stage artifacts in three tracks:

  1. Data model definition for graph nodes, properties, relationships, and identifiers
  2. Workflow definition for submission, review, finding, and decision capture
  3. Query catalog definition for named Cypher queries and expected outputs

These should remain tightly aligned so that every workflow action creates graph state that at least one named query can later interrogate.
