
dogancanbakir (Member) commented Jan 6, 2026

Summary

Adds support for storing httpx scan results directly to databases (MongoDB, PostgreSQL, MySQL) with both CLI flags and YAML config file options.

  • MongoDB: Stores results as BSON documents with indexes on timestamp, url, host, status_code, and tech (index creation sketched after this list)
  • PostgreSQL: Creates table with 62 columns matching Result struct fields, uses JSONB for complex types and TEXT[] for arrays
  • MySQL: Similar schema to PostgreSQL with JSON type for complex fields and appropriate TEXT types
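
A minimal sketch of the index creation described for the MongoDB backend, using the mongo-driver v1 API this PR pulls in; the helper name and structure are illustrative, not necessarily what internal/db/mongodb.go does:

package db

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// ensureIndexes creates the five indexes listed above on the results
// collection. Hypothetical helper; the real code may differ.
func ensureIndexes(coll *mongo.Collection) error {
	models := []mongo.IndexModel{
		{Keys: bson.D{{Key: "timestamp", Value: 1}}},
		{Keys: bson.D{{Key: "url", Value: 1}}},
		{Keys: bson.D{{Key: "host", Value: 1}}},
		{Keys: bson.D{{Key: "status_code", Value: 1}}},
		{Keys: bson.D{{Key: "tech", Value: 1}}},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	// CreateMany is a no-op for indexes that already exist with identical definitions.
	_, err := coll.Indexes().CreateMany(ctx, models)
	return err
}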

Features

  • Batched writes for performance (configurable batch size, default 100), sketched after this list
  • Auto-flush with configurable interval (default 1 minute)
  • Individual columns for each Result field (not a single JSON blob)
  • Environment variable support: HTTPX_DB_CONNECTION_STRING
  • Option to omit raw request/response data (-rdbor)
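
A condensed sketch of the batching pattern described above (channel feed, size-triggered flush, ticker-based auto-flush, final flush on close); types and names are illustrative, not the exact internal/db/writer.go code:

package db

import (
	"log"
	"time"
)

// Result stands in for runner.Result; its field set is elided here.
type Result struct{}

type database interface {
	InsertBatch([]*Result) error
}

// Writer is a pared-down version of the writer this PR adds.
type Writer struct {
	data          chan *Result
	batchSize     int
	flushInterval time.Duration
	db            database
}

// run accumulates results and flushes on batch size, on a ticker, or on
// channel close (shutdown).
func (w *Writer) run() {
	batch := make([]*Result, 0, w.batchSize)
	ticker := time.NewTicker(w.flushInterval)
	defer ticker.Stop()
	flush := func() {
		if len(batch) == 0 {
			return
		}
		if err := w.db.InsertBatch(batch); err != nil {
			log.Printf("db insert failed: %v", err)
		}
		batch = batch[:0]
	}
	for {
		select {
		case r, ok := <-w.data:
			if !ok { // channel closed: final flush, then exit
				flush()
				return
			}
			batch = append(batch, r)
			if len(batch) >= w.batchSize {
				flush()
			}
		case <-ticker.C:
			flush()
		}
	}
}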

New CLI Flags (OUTPUT group)

Flag                           Description
-rdb, -result-db               Enable database storage
-rdbc, -result-db-config       Path to YAML config file
-rdbt, -result-db-type         Database type (mongodb, postgres, mysql)
-rdbcs, -result-db-conn        Connection string
-rdbn, -result-db-name         Database name (default: httpx)
-rdbtb, -result-db-table       Table/collection name (default: results)
-rdbbs, -result-db-batch-size  Batch size (default: 100)
-rdbor, -result-db-omit-raw    Omit raw request/response data

Usage Examples

# MongoDB
httpx -l hosts.txt -rdb -rdbt mongodb -rdbcs "mongodb://localhost:27017"

# PostgreSQL
httpx -l hosts.txt -rdb -rdbt postgres -rdbcs "postgres://user:pass@localhost:5432/httpx"

# MySQL
httpx -l hosts.txt -rdb -rdbt mysql -rdbcs "user:pass@tcp(localhost:3306)/httpx"

# Using config file
httpx -l hosts.txt -rdb -rdbc /path/to/db-config.yaml

# Using environment variable
export HTTPX_DB_CONNECTION_STRING="mongodb://localhost:27017"
httpx -l hosts.txt -rdb -rdbt mongodb

Example Config File (db-config.yaml)

type: mongodb
connection-string: "mongodb://localhost:27017"
database-name: httpx
table-name: results
batch-size: 100
flush-interval: 1m
omit-raw: false

Testing with Docker

1. Start Database Containers

# MongoDB
docker run -d --name httpx-mongo -p 27017:27017 mongo:latest

# PostgreSQL
docker run -d --name httpx-postgres -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_DB=httpx postgres:latest

# MySQL
docker run -d --name httpx-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=httpx mysql:latest

2. Test httpx with Each Database

# Test MongoDB
echo "https://example.com" | go run ./cmd/httpx -rdb -rdbt mongodb -rdbcs "mongodb://localhost:27017"

# Test PostgreSQL
echo "https://example.com" | go run ./cmd/httpx -rdb -rdbt postgres -rdbcs "postgres://postgres:password@localhost:5432/httpx?sslmode=disable"

# Test MySQL
echo "https://example.com" | go run ./cmd/httpx -rdb -rdbt mysql -rdbcs "root:password@tcp(localhost:3306)/httpx"

3. Verify Results in Database

# MongoDB - check results
docker exec -it httpx-mongo mongosh --eval "db.getSiblingDB('httpx').results.find().pretty()"

# PostgreSQL - check results
docker exec -it httpx-postgres psql -U postgres -d httpx -c "SELECT url, status_code, title FROM results;"

# MySQL - check results
docker exec -it httpx-mysql mysql -uroot -ppassword -e "SELECT url, status_code, title FROM httpx.results;"

4. Cleanup

docker rm -f httpx-mongo httpx-postgres httpx-mysql

Test plan

  • Test MongoDB storage with Docker container
  • Test PostgreSQL storage with Docker container
  • Test MySQL storage with Docker container
  • Verify all Result fields are stored correctly
  • Verify batching works as expected
  • Verify graceful shutdown flushes remaining results

Closes #1973
Closes #2360
Closes #2361
Closes #2362

Summary by CodeRabbit

  • New Features

    • Store scan results to MongoDB, PostgreSQL, or MySQL via new result-db flags or config file; configurable batch size, flush interval, and option to omit raw request/response data.
    • Database output runs alongside existing result handlers with graceful shutdown and batching.
  • Documentation

    • README CLI help text refined: spacing, wording, and protocol option descriptions (e.g., http2, http3) updated.


auto-assign bot requested a review from dwisiswant0 (January 6, 2026 09:54)
coderabbitai bot commented Jan 6, 2026

Walkthrough

Adds pluggable database result storage (MongoDB, PostgreSQL, MySQL) with CLI flags, config loading, a registry/factory, batched writer, schema management, and runner integration to persist scan results.

Changes

Cohort / File(s) | Summary
Core DB infra & config
internal/db/db.go, internal/db/config.go, internal/db/writer.go
Registry/factory for database backends; Config/Options with YAML loading, defaults and validation; Writer that batches results, background flush loop, graceful shutdown, and writer callback integration.
MongoDB adapter
internal/db/mongodb.go
MongoDB implementation: connect/ping, collection selection, index creation, result→BSON conversion, InsertMany batch writes.
PostgreSQL adapter
internal/db/postgres.go
Postgres implementation: connection management, EnsureSchema (table/indexes), transactional batch inserts, JSONB/array serialization, error handling.
MySQL adapter
internal/db/mysql.go
MySQL implementation: connection, schema creation, transactional multi-row inserts, JSON marshaling for nested fields, prepared statements, error handling.
Runner & CLI integration
runner/options.go, cmd/httpx/httpx.go
New runner Options fields and CLI flags for result DB config (--result-db, --result-db-config, --result-db-type, etc.); setupDatabaseOutput invoked during startup to wire writer into OnResult/OnClose callbacks.
Deps & docs
go.mod, README.md
Database driver dependencies added (MongoDB, lib/pq, go-sql-driver/mysql, yaml); README cosmetic/help text updates.

Sequence Diagram(s)

sequenceDiagram
    participant CLI as CLI
    participant Runner as httpx Runner
    participant Writer as DB Writer (callback)
    participant Batcher as Batcher Loop
    participant DB as Database (Mongo/Postgres/MySQL)

    CLI->>Runner: start with --result-db / config
    Runner->>Writer: NewWriter(ctx, cfg) / register OnResult & OnClose

    loop each scan result
        Runner->>Writer: OnResult(result)
        Writer->>Writer: enqueue to channel (non-blocking)
    end

    alt background flush triggers
        Batcher->>Writer: drain channel (batch)
        Batcher->>DB: InsertBatch(results)
        DB-->>Batcher: ok / error
        Batcher->>Writer: update counters / logs
    end

    CLI->>Runner: finish / interrupt
    Runner->>Writer: Close()
    Writer->>Batcher: cancel, flush remaining
    Batcher->>DB: final InsertBatch
    Writer->>DB: Close connection

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 Hop, hop, results hop along,
Into boxes—rows and docs so strong.
Mongo, Postgres, MySQL cheer,
Batched and flushed, the data's near.
I nibble logs and count each byte—hooray!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 18.75%, below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (4 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): the title clearly and concisely describes the main feature addition: database output support for storing scan results.
  • Linked Issues Check (✅ Passed): the PR implements all requirements from linked issues #1973, #2360, #2361, and #2362: CLI flag support, MongoDB/PostgreSQL/MySQL backends, YAML config, batching, and a proper schema with indexed fields.
  • Out of Scope Changes Check (✅ Passed): all changes relate directly to database output support; README.md updates are supporting changes, and no unrelated modifications were detected.
✨ Finishing touches
  • 📝 Generate docstrings

📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e64dfcb and 4f010d6.

📒 Files selected for processing (3)
  • internal/db/mysql.go
  • internal/db/postgres.go
  • internal/db/writer.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • internal/db/mysql.go
  • internal/db/writer.go
  • internal/db/postgres.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Test Builds (macOS-latest)
  • GitHub Check: Test Builds (ubuntu-latest)
  • GitHub Check: Test Builds (windows-latest)
  • GitHub Check: Functional Test (ubuntu-latest)
  • GitHub Check: Analyze (go)
  • GitHub Check: Functional Test (macOS-latest)
  • GitHub Check: Functional Test (windows-latest)

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI Agents
In @internal/db/mysql.go:
- Around line 47-152: EnsureSchema currently injects m.cfg.TableName directly
into the CREATE TABLE SQL; validate or safely quote the identifier before
interpolation to prevent SQL injection (e.g., enforce a safe regex like
^[A-Za-z0-9_]+$ on m.cfg.TableName and/or wrap the identifier in backticks after
rejecting/escaping any backticks). Update mysqlDatabase.EnsureSchema to use the
validated/quoted table name variable when building the schema string (replace
direct use of m.cfg.TableName). Also add a matching index on url (e.g., add
`INDEX idx_url (url)`) to the CREATE TABLE statement for parity with the
Postgres schema.
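
A sketch of the validate-and-quote approach this comment asks for; the quoteIdentifier added in the follow-up commit may differ in detail:

package db

import (
	"fmt"
	"regexp"
)

var identRe = regexp.MustCompile(`^[A-Za-z0-9_]+$`)

// quoteIdentifier rejects anything outside a conservative allow-list and
// wraps the name in backticks, so it can be interpolated into DDL safely.
func quoteIdentifier(name string) (string, error) {
	if !identRe.MatchString(name) {
		return "", fmt.Errorf("invalid identifier: %q", name)
	}
	return "`" + name + "`", nil
}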

In @internal/db/postgres.go:
- Around line 48-161: The SQL builds in the schema variable interpolate
p.cfg.TableName directly (and into index names like idx_%s_timestamp and later
INSERT queries), which risks SQL injection; update the code to quote identifiers
using pq.QuoteIdentifier(p.cfg.TableName) (or an equivalent safe
identifier-quoting helper) everywhere the table name or derived index names are
inserted (including schema, index names, and any INSERT/SELECT/DROP/ALTER
statements) so that the table and index identifiers are safely escaped before
passing into fmt.Sprintf.
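
For reference, lib/pq ships pq.QuoteIdentifier; a minimal sketch of applying it to both the table name and a derived index name (column set reduced for brevity, the real table has 62 columns):

package db

import (
	"fmt"

	"github.com/lib/pq"
)

// buildSchema demonstrates the quoting pattern the comment describes.
func buildSchema(tableName string) string {
	// QuoteIdentifier escapes embedded double quotes and wraps the name,
	// so a hostile table name cannot break out of identifier position.
	table := pq.QuoteIdentifier(tableName)
	idx := pq.QuoteIdentifier("idx_" + tableName + "_timestamp")
	return fmt.Sprintf(
		"CREATE TABLE IF NOT EXISTS %s (id BIGSERIAL PRIMARY KEY, timestamp TIMESTAMPTZ);\n"+
			"CREATE INDEX IF NOT EXISTS %s ON %s (timestamp);",
		table, idx, table)
}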

In @internal/db/writer.go:
- Around line 130-137: The Close method races with sends from the callback
returned by GetWriterCallback: after calling w.cancel() but before close(w.data)
a resumed callback can see both w.ctx.Done() and w.data ready and attempt to
send into a closed channel, causing a panic; update GetWriterCallback (the
sending goroutine) to check w.closed (or use an atomic/boolean helper like
w.isClosed()) before selecting/send and bail out if closed, or instead change
Close to close w.data before/atomically with cancellation (or use a mutex to
synchronize) so that the callback never attempts a send after closure; modify
either Writer.Close, GetWriterCallback, or both to perform this
closed-check/synchronization using the existing w.closed, w.cancel, w.data, and
w.ctx symbols.
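
One way to implement the closed-check described above, assuming a mutex-guarded closed flag (field and method names mirror the symbols mentioned but are illustrative):

package db

import (
	"context"
	"sync"
)

type Result struct{}

// Writer shows only the fields relevant to the shutdown race.
type Writer struct {
	mu     sync.Mutex
	closed bool
	data   chan *Result
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

// enqueue is what the OnResult callback ultimately calls; it bails out
// once the writer is closed, so it can never send on a closed channel.
func (w *Writer) enqueue(r *Result) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.closed {
		return
	}
	w.data <- r
}

// Close flips the flag and closes the channel under the same lock, so no
// enqueue can interleave between the two.
func (w *Writer) Close() {
	w.mu.Lock()
	if w.closed {
		w.mu.Unlock()
		return
	}
	w.closed = true
	close(w.data)
	w.mu.Unlock()
	w.cancel()
	w.wg.Wait() // run() drains and flushes what remains after the close
}
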
🧹 Nitpick comments (12)
internal/db/config.go (1)

39-49: Consider stricter validation for numeric fields.

The current validation only checks database type and connection string. BatchSize and FlushInterval are validated indirectly through ApplyDefaults (which replaces non-positive values with defaults). While this works, it means explicitly setting negative values gets silently corrected rather than rejected.

Consider adding explicit validation to reject negative values as invalid input, which would make configuration errors more apparent to users.

🔎 Example stricter validation
 func (c *Config) Validate() error {
 	if !c.Type.IsValid() {
 		return fmt.Errorf("invalid database type: %s (supported: %v)", c.Type, SupportedDatabases())
 	}
 
 	if c.ConnectionString == "" {
 		return fmt.Errorf("connection string is required")
 	}
+
+	if c.BatchSize < 0 {
+		return fmt.Errorf("batch size cannot be negative")
+	}
+
+	if c.FlushInterval < 0 {
+		return fmt.Errorf("flush interval cannot be negative")
+	}
 
 	return nil
 }
internal/db/mongodb.go (1)

115-127: Consider direct BSON marshaling for efficiency.

The JSON round-trip (marshal to JSON, then unmarshal to BSON) is simple and reliable but adds overhead. For scanning workloads with moderate throughput, this approach is acceptable and prioritizes maintainability.

If performance profiling reveals this as a bottleneck, consider using BSON tags on the runner.Result struct and marshaling directly to BSON.

internal/db/writer.go (2)

67-71: Consider also omitting RawHeaders when omitRaw is enabled.

The omitRaw option clears Raw, Request, and ResponseBody, but RawHeaders (which contains the raw HTTP response headers) is not cleared. If the intent is to reduce storage of raw HTTP data, this field should likely be included.

🔎 Proposed fix
 		if w.omitRaw {
 			r.Raw = ""
 			r.Request = ""
 			r.ResponseBody = ""
+			r.RawHeaders = ""
 		}

96-101: Failed batch data is discarded without retry.

When InsertBatch fails, the error is logged but the batch is cleared at line 103, permanently losing those results. For reliability, consider implementing a retry mechanism or dead-letter queue for failed batches.
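
A sketch of a bounded retry with linear backoff, under the assumption that dropping the batch after N attempts is acceptable (a dead-letter file would be the next step up in durability):

package db

import (
	"fmt"
	"time"
)

type Result struct{}

type database interface {
	InsertBatch([]*Result) error
}

// flushWithRetry retries a failed insert a few times before giving up.
func flushWithRetry(db database, batch []*Result, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = db.InsertBatch(batch); err == nil {
			return nil
		}
		time.Sleep(time.Duration(i+1) * time.Second)
	}
	return fmt.Errorf("dropping %d results after %d attempts: %w", len(batch), attempts, err)
}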

internal/db/db.go (2)

22-29: IsValid() and SupportedDatabases() may diverge if new types are added.

Both functions enumerate the valid database types independently. If a new database type is added, both must be updated. Consider deriving one from the other to ensure consistency.

🔎 Proposed fix
+var supportedDatabases = []DatabaseType{MongoDB, PostgreSQL, MySQL}
+
 func (d DatabaseType) IsValid() bool {
-	switch d {
-	case MongoDB, PostgreSQL, MySQL:
-		return true
-	default:
-		return false
+	for _, db := range supportedDatabases {
+		if d == db {
+			return true
+		}
 	}
+	return false
 }

 func SupportedDatabases() []DatabaseType {
-	return []DatabaseType{MongoDB, PostgreSQL, MySQL}
+	return supportedDatabases
 }

Also applies to: 68-70


43-49: Registry map lacks synchronization, but safe given init-only writes.

The registry map is written to only during init() calls, which are guaranteed to run sequentially before main(). This is safe, but adding a comment documenting this assumption would help future maintainers.

runner/options.go (1)

680-801: No validation added for database options.

ValidateOptions() doesn't validate the new database-related flags. Consider adding validation to ensure:

  • If -rdb is enabled, either -rdbc or (-rdbt + -rdbcs) must be provided
  • Database type is one of the supported values
internal/db/postgres.go (2)

218-227: JSON marshaling errors are silently ignored.

All json.Marshal calls ignore the error return value. While marshaling struct types rarely fails, nil pointer dereferences or custom MarshalJSON implementations could fail. Consider at minimum logging errors or using a helper that returns []byte("null") on failure.

🔎 Proposed helper pattern
func mustMarshalJSON(v interface{}) []byte {
    data, err := json.Marshal(v)
    if err != nil {
        return []byte("null")
    }
    return data
}

171-252: Consider using COPY for better batch insert performance.

For PostgreSQL, using COPY protocol (via pq.CopyIn) is significantly faster for bulk inserts compared to prepared statement execution in a loop. This could improve performance for large batch sizes.
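
The standard lib/pq COPY pattern looks like this, sketched with a reduced column set:

package db

import (
	"database/sql"

	"github.com/lib/pq"
)

// copyBatch inserts rows via the COPY protocol inside one transaction.
func copyBatch(db *sql.DB, rows [][2]any) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := txn.Prepare(pq.CopyIn("results", "url", "status_code"))
	if err != nil {
		_ = txn.Rollback()
		return err
	}
	for _, r := range rows {
		if _, err := stmt.Exec(r[0], r[1]); err != nil {
			_ = txn.Rollback()
			return err
		}
	}
	// A final Exec with no arguments flushes the buffered COPY data.
	if _, err := stmt.Exec(); err != nil {
		_ = txn.Rollback()
		return err
	}
	if err := stmt.Close(); err != nil {
		_ = txn.Rollback()
		return err
	}
	return txn.Commit()
}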

internal/db/mysql.go (3)

56-60: VARCHAR length constraints may cause data truncation.

Several VARCHAR columns have fixed limits that may truncate real-world data:

  • host VARCHAR(255) - most hostnames fit, but some edge cases exist
  • method VARCHAR(10) - custom methods could exceed this
  • port VARCHAR(10) - reasonable
  • scheme VARCHAR(10) - reasonable

Consider using TEXT for method or increasing the limit to handle custom HTTP methods.


209-227: JSON marshaling errors silently ignored.

Same issue as PostgreSQL implementation - all json.Marshal errors are discarded. Apply the same fix pattern as suggested for postgres.go.


162-252: Consider multi-value INSERT for better MySQL batch performance.

MySQL supports multi-value INSERT syntax (INSERT INTO t VALUES (...), (...), ...) which is faster than executing prepared statements in a loop. This could significantly improve batch insert performance.
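
A sketch of building such a statement; columns reduced to two, and the table name assumed pre-validated (see the quoteIdentifier sketch earlier):

package db

import (
	"fmt"
	"strings"
)

// buildMultiInsert produces one INSERT with N value tuples plus the
// flattened argument slice, for use with db.Exec(query, args...).
func buildMultiInsert(table string, rows [][2]any) (string, []any) {
	placeholders := make([]string, 0, len(rows))
	args := make([]any, 0, len(rows)*2)
	for _, r := range rows {
		placeholders = append(placeholders, "(?, ?)")
		args = append(args, r[0], r[1])
	}
	query := fmt.Sprintf("INSERT INTO `%s` (url, status_code) VALUES %s",
		table, strings.Join(placeholders, ", "))
	return query, args
}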

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc2c7a2 and e64dfcb.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (10)
  • README.md
  • cmd/httpx/httpx.go
  • go.mod
  • internal/db/config.go
  • internal/db/db.go
  • internal/db/mongodb.go
  • internal/db/mysql.go
  • internal/db/postgres.go
  • internal/db/writer.go
  • runner/options.go
🧰 Additional context used
🧬 Code graph analysis (5)
internal/db/writer.go (4)
internal/db/db.go (2)
  • Database (31-41)
  • NewDatabase (51-66)
internal/db/config.go (1)
  • Config (23-37)
runner/types.go (1)
  • Result (35-105)
runner/options.go (1)
  • OnResultCallback (51-51)
internal/db/postgres.go (5)
internal/db/db.go (4)
  • Register (47-49)
  • PostgreSQL (14-14)
  • Database (31-41)
  • DatabaseType (10-10)
internal/db/config.go (1)
  • Config (23-37)
runner/types.go (2)
  • Result (35-105)
  • Trace (107-123)
common/httpx/csp.go (1)
  • CSPData (25-28)
common/httpx/proto.go (1)
  • HTTP2 (8-8)
internal/db/db.go (2)
runner/types.go (1)
  • Result (35-105)
internal/db/config.go (1)
  • Config (23-37)
internal/db/mysql.go (5)
internal/db/db.go (4)
  • Register (47-49)
  • MySQL (15-15)
  • Database (31-41)
  • DatabaseType (10-10)
internal/db/config.go (1)
  • Config (23-37)
runner/types.go (2)
  • Result (35-105)
  • Trace (107-123)
common/httpx/csp.go (1)
  • CSPData (25-28)
common/httpx/proto.go (1)
  • HTTP2 (8-8)
internal/db/config.go (1)
internal/db/db.go (2)
  • DatabaseType (10-10)
  • SupportedDatabases (68-70)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Functional Test (macOS-latest)
  • GitHub Check: Functional Test (windows-latest)
  • GitHub Check: Functional Test (ubuntu-latest)
  • GitHub Check: Lint Test
  • GitHub Check: Analyze (go)
  • GitHub Check: release-test
🔇 Additional comments (19)
README.md (2)

100-166: LGTM - Documentation alignment improvements.

The cosmetic adjustments to spacing and alignment improve readability and consistency across the CLI help sections.


206-213: LGTM - Database output flags properly documented.

The new database-related flags (-rdb, -rdbc, -rdbt, etc.) are clearly documented with appropriate defaults and descriptions, aligning well with the PR objectives to add database persistence support.

internal/db/config.go (3)

11-21: LGTM - Well-chosen defaults.

The default values are sensible: batch size of 100 balances throughput and memory, 1-minute flush interval prevents data loss, and the naming conventions are clear.


69-91: LGTM - Correct configuration loading flow.

The function properly handles YAML parsing, environment variable fallback for connection strings, default application, and validation in the correct order. Error wrapping provides good context.


104-125: LGTM - Consistent CLI-to-config conversion.

The ToConfig method correctly mirrors the file-based loading logic, ensuring both configuration paths (file vs. CLI flags) go through the same validation and defaulting pipeline.

cmd/httpx/httpx.go (2)

68-69: LGTM - Correct integration placement.

The database output setup is called at the appropriate point in the initialization sequence, after asset upload setup and before runner creation, allowing it to configure the result callbacks properly.


151-208: LGTM - Correct callback chaining and error handling.

The implementation properly:

  • Loads configuration from file or CLI options with appropriate error handling
  • Chains callbacks to preserve existing handlers (asset upload, etc.)
  • Orders operations correctly: existing OnResult callback executes first, writer closes before existing OnClose
  • Uses Fatal for setup errors, which is appropriate at initialization time
internal/db/mongodb.go (3)

15-28: LGTM - Standard factory registration pattern.

The init-based registration integrates cleanly with the database registry, and the constructor properly defers connection establishment to the explicit Connect() call.


30-59: LGTM - Robust connection handling with appropriate timeouts.

The Connect method uses reasonable timeouts (10s for connection/selection, ping verification) and Close properly handles cleanup with a 5s timeout to prevent blocking during shutdown.


61-87: No action needed. MongoDB's CreateMany() is idempotent—re-creating indexes that already exist with identical definitions is a no-op and succeeds without error. The index definitions are appropriate for the query patterns, and the implementation will work correctly on subsequent application runs.

go.mod (1)

55-61: Dependency versions are appropriate; no critical security concerns identified.

The versions in go.mod are current and safe:

  • mysql (v1.9.3), lib/pq (v1.10.9): Latest stable releases. CVE-2025-24787 and CVE-2025-1094/4207 are in upstream components (WhoDB and PostgreSQL libpq C library respectively), not these Go drivers.
  • mongo-driver (v1.17.6): On the maintained v1.x branch; v2.4 is available but v1.x remains actively supported.
  • mapstructure/v2 (v2.4.0): Confirmed as the patched version that fixed the information-leak vulnerability (present in versions < 2.4.0).
  • gocsv and publicsuffix-go: Latest versions with no reported security vulnerabilities.
internal/db/writer.go (2)

26-59: LGTM!

The NewWriter function properly initializes the database connection, ensures schema, and starts the background goroutine with appropriate lifecycle management.


146-148: LGTM!

Simple atomic counter accessor.

internal/db/db.go (2)

31-41: LGTM!

The Database interface is well-defined with clear lifecycle methods and batch insert support.


51-66: LGTM!

Good defensive validation before factory invocation.

runner/options.go (2)

354-361: LGTM!

New database configuration fields are appropriately added to the Options struct.


498-505: Environment variable HTTPX_DB_CONNECTION_STRING is properly wired.

The flag description for -rdbcs correctly mentions the environment variable. While the flag definition doesn't use goflags.EnvVar, the environment variable is properly handled as a fallback in internal/db/config.go (lines 81 and 115): if no connection string is provided via the flag, the code reads from HTTPX_DB_CONNECTION_STRING via os.Getenv(). This is a valid design pattern where the flag takes precedence and the environment variable serves as a fallback.

Likely an incorrect or invalid review comment.
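
For reference, the fallback described above amounts to this pattern (paraphrased, not the exact config.go code):

package db

import "os"

// resolveConnString mirrors the precedence described above: the flag
// value wins, and the environment variable is the fallback.
func resolveConnString(flagValue string) string {
	if flagValue != "" {
		return flagValue
	}
	return os.Getenv("HTTPX_DB_CONNECTION_STRING")
}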

internal/db/postgres.go (1)

26-38: LGTM!

Connect implementation properly opens connection and pings to verify connectivity.

internal/db/mysql.go (1)

26-38: LGTM!

Connect implementation correctly opens and pings MySQL database.

- Fix SQL injection in postgres.go using pq.QuoteIdentifier for table/index names
- Fix SQL injection in mysql.go using custom quoteIdentifier function
- Fix race condition in writer.go by checking closed state before channel send
- Add missing idx_url index in mysql.go for parity with PostgreSQL schema
- Include RawHeaders in omitRaw check for consistency
Mzack9999 (Member) left a comment

overall lgtm, a few minor suggestions. Also, what do you think about moving the queries to sqlc for easier maintenance later (move the queries to an sqlc file and generate the glue code with sqlc generate)?

}
}

func (w *Writer) run() {

flagSet.BoolVarP(&options.ResultDatabase, "result-db", "rdb", false, "store results in database"),
flagSet.StringVarP(&options.ResultDatabaseConfig, "result-db-config", "rdbc", "", "path to database config file"),
flagSet.StringVarP(&options.ResultDatabaseType, "result-db-type", "rdbt", "", "database type (mongodb, postgres, mysql)"),
flagSet.StringVarP(&options.ResultDatabaseConnStr, "result-db-conn", "rdbcs", "", "database connection string (env: HTTPX_DB_CONNECTION_STRING)"),
We can try to use goflags.StringVarEnv
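
Assuming goflags exposes StringVarEnv with a (field, long, short, default, env name, usage) signature (worth confirming against the goflags version in go.mod), the flag above would become something like:

flagSet.StringVarEnv(&options.ResultDatabaseConnStr, "result-db-conn", "rdbcs", "", "HTTPX_DB_CONNECTION_STRING", "database connection string"),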
