Add initial project structure

- add base structure
- unify protocol statistics metric creation and propagation
- implement the ARP and OpenVPN collectors
- refactor to follow the Prometheus exporter standard
- add the instance label to all metrics
- refactor the call chain
- add gateway, unbound_dns and openvpn implementations
- add gateway metrics
- restructure packages; clean up go.mod; implement cron
- implement cron in the collector; refactor utils in the opnsense package

Refactor names and implement Option functions to disable collector instances

Add GitHub Actions workflows

Create codeql.yml

- clean

fix stuff
ihatemodels 2023-11-06 15:49:15 +02:00
commit 24e8161262
944 changed files with 421292 additions and 0 deletions

7
.dockerignore Normal file

@ -0,0 +1,7 @@
./opnsense-exporter-local
README.md
.git/
.gitignore
.github
LICENSE
.golangci.yml

32
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,32 @@
name: CI
on:
push:
tags:
- "v*.*.*"
pull_request:
branches:
- "main"
jobs:
tests:
name: Tests/Linters
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: '1.21' # Use the version of Go your project requires
- name: Check out code
uses: actions/checkout@v2
- name: "Run Linters"
uses: golangci/golangci-lint-action@v3
with:
version: latest
args: --verbose
- name: Run tests
run: go test -v ./...

58
.github/workflows/codeql.yml vendored Normal file

@ -0,0 +1,58 @@
name: codeql
on:
push:
branches:
- 'main'
paths-ignore:
- '**/*.md'
- '**/*.txt'
- '**/*.yaml'
- '**/*_test.go'
pull_request:
branches:
- 'main'
paths-ignore:
- '**/*.md'
- '**/*.txt'
- '**/*.yaml'
- '**/*_test.go'
jobs:
analyze:
name: Analyze
runs-on: 'ubuntu-latest'
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language:
- go
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: go.mod
check-latest: true
-
name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
-
name: Autobuild
uses: github/codeql-action/autobuild@v2
-
name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

23
.gitignore vendored Normal file

@ -0,0 +1,23 @@
# If you prefer the allow list template instead of the deny list, see community template:
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work
*opnsense-exporter-local
local.Makefile

0
.golangci.yml Normal file

0
Dockerfile Normal file

40
Makefile Normal file

@ -0,0 +1,40 @@
BINARY_NAME=opnsense-exporter-local
.PHONY: default
default: run-test
sync-vendor:
go mod tidy
go mod vendor
local-run:
go build \
-tags osusergo,netgo \
-ldflags '-w -extldflags "-static" -X main.version=local-test' \
-v -o ${BINARY_NAME}
./${BINARY_NAME} --log.level="debug" \
--log.format="logfmt" \
--web.telemetry-path="/metrics" \
--web.listen-address=":8080" \
--runtime.gomaxprocs=4 \
--exporter.instance-label="opnsense-eu1" \
--exporter.disable-arp-table \
--exporter.disable-cron-table \
--opnsense.protocol="https" \
--opnsense.address="ops.domain.com" \
--opnsense.api-key="XXX" \
--opnsense.api-secret="XXX" \
--web.disable-exporter-metrics
test:
go test -v ./...
clean:
gofmt -s -w $(shell find . -type f -name '*.go'| grep -v "/vendor/\|/.git/")
go clean
rm ./${BINARY_NAME}
lint:
gofmt -s -w $(shell find . -type f -name '*.go'| grep -v "/vendor/\|/.git/")
golangci-lint run --fix

81
README.md Normal file

@ -0,0 +1,81 @@
# OPNsense Prometheus Exporter
The OPNsense exporter lets you monitor your OPNsense firewall via its API.
`Still under heavy development. The full metrics list is not yet implemented.`
# Table of Contents
1. **[OPNsense User Permissions](#opnsense-user-permissions)**
2. **[Usage](#usage)**
3. **[Configuration](#configuration)**
- **[SSL/TLS](#ssltls)**
4. **[Grafana Dashboard](#grafana-dashboard)**
## OPNsense user permissions
**TODO**
## Usage
**TODO**
## Configuration
To configure where your OPNsense API is located, you can use the following flags:
- `--opnsense.protocol` - The protocol to use to connect to the OPNsense API. Can be either `http` or `https`.
- `--opnsense.address` - The hostname or IP address of the OPNsense API.
- `--opnsense.api-key` - The API key to use to connect to the OPNsense API.
- `--opnsense.api-secret` - The API secret to use to connect to the OPNsense API
### SSL/TLS
- `--opnsense.insecure` - Disable TLS certificate verification. Not recommended. Defaults to `false`.
- If your API is served with self-signed certificates, add them to the system trust store instead.
TODO: add Docker example.
You can disable parts of the exporter using the following flags:
- `--exporter.disable-arp-table` - Disable the scraping of the ARP table. Defaults to `false`.
- `--exporter.disable-cron-table` - Disable the scraping of the cron table. Defaults to `false`.
The full list of flags:
```bash
Flags:
-h, --[no-]help Show context-sensitive help (also try --help-long and --help-man).
--log.level="info" Log level. One of: [debug, info, warn, error]
--log.format="logfmt" Log format. One of: [logfmt, json]
--web.telemetry-path="/metrics"
Path under which to expose metrics.
--[no-]web.disable-exporter-metrics
Exclude metrics about the exporter itself (promhttp_*, process_*, go_*). ($OPNSENSE_EXPORTER_DISABLE_EXPORTER_METRICS)
--runtime.gomaxprocs=2 The target number of CPUs that the Go runtime will run on (GOMAXPROCS) ($GOMAXPROCS)
--exporter.instance-label=EXPORTER.INSTANCE-LABEL
Label to use to identify the instance in every metric. If you have multiple instances of the exporter, you can differentiate them by using different value in this flag, that represents the instance of the target OPNsense.
($OPNSENSE_EXPORTER_INSTANCE_LABEL)
--[no-]exporter.disable-arp-table
Disable the scraping of the ARP table ($OPNSENSE_EXPORTER_DISABLE_ARP_TABLE)
--[no-]exporter.disable-cron-table
Disable the scraping of the cron table ($OPNSENSE_EXPORTER_DISABLE_CRON_TABLE)
--opnsense.protocol=OPNSENSE.PROTOCOL
Protocol to use to connect to OPNsense API. One of: [http, https] ($OPNSENSE_EXPORTER_OPS_PROTOCOL)
--opnsense.address=OPNSENSE.ADDRESS
Hostname or IP address of OPNsense API ($OPNSENSE_EXPORTER_OPS_API)
--opnsense.api-key=OPNSENSE.API-KEY
API key to use to connect to OPNsense API ($OPNSENSE_EXPORTER_OPS_API_KEY)
--opnsense.api-secret=OPNSENSE.API-SECRET
API secret to use to connect to OPNsense API ($OPNSENSE_EXPORTER_OPS_API_SECRET)
--[no-]opnsense.insecure Disable TLS certificate verification ($OPNSENSE_EXPORTER_OPS_INSECURE)
--[no-]web.systemd-socket Use systemd socket activation listeners instead of port listeners (Linux only).
--web.listen-address=:8080 ...
Addresses on which to expose metrics and web interface. Repeatable for multiple addresses.
--web.config.file="" [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication. See: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md
```
## Grafana Dashboard
**TODO**

36
go.mod Normal file

@ -0,0 +1,36 @@
module github.com/st3ga/opnsense-exporter
go 1.21
require (
github.com/alecthomas/kingpin/v2 v2.3.2
github.com/go-kit/log v0.2.1
github.com/prometheus/client_golang v1.17.0
github.com/prometheus/common v0.44.0
github.com/prometheus/exporter-toolkit v0.10.0
)
require (
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 // indirect
github.com/prometheus/procfs v0.11.1 // indirect
github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/oauth2 v0.8.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/sys v0.12.0 // indirect
golang.org/x/text v0.9.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

91
go.sum Normal file

@ -0,0 +1,91 @@
github.com/alecthomas/kingpin/v2 v2.3.2 h1:H0aULhgmSzN8xQ3nX1uxtdlTHYoPLu5AhHxWrKI6ocU=
github.com/alecthomas/kingpin/v2 v2.3.2/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 h1:v7DLqVdK4VrYkVD5diGdl4sxJurKJEMnODWRJlxV9oM=
github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/exporter-toolkit v0.10.0 h1:yOAzZTi4M22ZzVxD+fhy1URTuNRj/36uQJJ5S8IPza8=
github.com/prometheus/exporter-toolkit v0.10.0/go.mod h1:+sVFzuvV5JDyw+Ih6p3zFxZNVnKQa3x5qPmDSiPu4ZY=
github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -0,0 +1,88 @@
package collector
import (
"fmt"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type arpTableCollector struct {
log log.Logger
subsystem string
instance string
entries *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &arpTableCollector{
subsystem: "arp_table",
})
}
func (c *arpTableCollector) Name() string {
return c.subsystem
}
func (c *arpTableCollector) Register(namespace, instance string, log log.Logger) {
c.log = log
c.instance = instance
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.entries = buildPrometheusDesc(c.subsystem, "entries",
"Arp entries by ip, mac, hostname, interface description, type, expired and permanent",
[]string{"ip", "mac", "hostname", "interface_description", "type", "expired", "permanent"},
)
// c.protocolStatistics = map[string]*prometheus.Desc{
// "arpSentRequests": buildPrometheusDesc(c.subsystem, "sent_requests_total",
// "Total number of sent arp requests.", nil),
// "arpReceivedRequests": buildPrometheusDesc(c.subsystem, "received_requests_total",
// "Total number of received arp requests", nil),
// "arpSentReplies": buildPrometheusDesc(c.subsystem, "sent_replies_total",
// "Total number of sent arp replies since OPNsense start.", nil),
// "arpReceivedReplies": buildPrometheusDesc(c.subsystem, "received_replies_total",
// "Total number of received arp replies", nil),
// "arpDroppedDuplicateAddress": buildPrometheusDesc(c.subsystem, "dropped_duplicate_address_total",
// "Total number of dropped arp requests due to duplicate address", nil),
// "arpEntriesTimeout": buildPrometheusDesc(c.subsystem, "entries_timeout_total",
// "Total number of arp entries that timed out", nil),
// "arpDroppedNoEntry": buildPrometheusDesc(c.subsystem, "dropped_no_entry_total",
// "Total number of dropped arp requests due to no entry", nil),
// }
}
func (c *arpTableCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.entries
}
func (c *arpTableCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
data, err := client.FetchArpTable()
if err != nil {
return err
}
for _, arp := range data.Arp {
ch <- prometheus.MustNewConstMetric(
c.entries,
prometheus.GaugeValue,
1,
arp.IP,
arp.Mac,
arp.Hostname,
arp.IntfDescription,
arp.Type,
fmt.Sprintf("%t", arp.Expired),
fmt.Sprintf("%t", arp.Permanent),
c.instance,
)
}
return nil
}


@ -0,0 +1,148 @@
package collector
import (
"errors"
"fmt"
"sync"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
const namespace = "opnsense"
// CollectorInstance is the interface that service-specific collectors must implement.
type CollectorInstance interface {
Register(namespace, instance string, log log.Logger)
Name() string
Describe(ch chan<- *prometheus.Desc)
Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError
}
// collectorInstances is a list of collectorInstances that will be registered
// from the init() function in each collector file
var collectorInstances []CollectorInstance
type Collector struct {
instanceLabel string
mutex sync.RWMutex
Client *opnsense.Client
log log.Logger
collectors []CollectorInstance
scrapes prometheus.CounterVec
endpointErrors prometheus.CounterVec
}
type Option func(*Collector) error
// withoutCollectorInstance removes a collector by given name from the list of collectors
// that are registered from their init functions.
func withoutCollectorInstance(name string) Option {
return func(o *Collector) error {
for i, collector := range o.collectors {
if collector.Name() == name {
o.collectors = append(o.collectors[:i], o.collectors[i+1:]...)
return nil
}
}
return fmt.Errorf("collector %s not found", name)
}
}
// WithoutArpTableCollector Option
// removes the arp_table collector from the list of collectors
func WithoutArpTableCollector() Option {
return withoutCollectorInstance("arp_table")
}
// WithoutCronCollector Option
// removes the cron collector from the list of collectors
func WithoutCronCollector() Option {
return withoutCollectorInstance("cron")
}
// New creates a new Collector instance.
func New(client *opnsense.Client, log log.Logger, instanceName string, options ...Option) (*Collector, error) {
c := Collector{
Client: client,
log: log,
instanceLabel: instanceName,
collectors: collectorInstances,
}
for _, option := range options {
if err := option(&c); err != nil {
return nil, errors.Join(err, fmt.Errorf("failed to apply option"))
}
}
for _, collector := range c.collectors {
collector.Register(namespace, instanceName, c.log)
}
c.scrapes = *prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: namespace,
Name: "exporter_scrapes_total",
Help: "Total number of times OPNsense was scraped for metrics.",
}, []string{"opnsense_instance"})
c.endpointErrors = *prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: namespace,
Name: "exporter_endpoint_errors_total",
Help: "Total number of errors by endpoint returned by the OPNsense API during data fetching",
}, []string{"endpoint", "opnsense_instance"})
prometheus.MustRegister(c.scrapes)
prometheus.MustRegister(c.endpointErrors)
c.scrapes.WithLabelValues(c.instanceLabel).Add(0)
for _, path := range c.Client.Endpoints() {
c.endpointErrors.WithLabelValues(string(path), c.instanceLabel).Add(0)
}
return &c, nil
}
// Describe implements the prometheus.Collector interface.
func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
c.scrapes.Describe(ch)
c.endpointErrors.Describe(ch)
for _, collector := range c.collectors {
collector.Describe(ch)
}
}
// Collect implements the prometheus.Collector interface.
func (c *Collector) Collect(ch chan<- prometheus.Metric) {
c.mutex.Lock()
defer c.mutex.Unlock()
var wg sync.WaitGroup
wg.Add(len(c.collectors))
for _, collector := range c.collectors {
go func(coll CollectorInstance) {
if err := coll.Update(c.Client, ch); err != nil {
level.Error(c.log).Log(
"msg", "failed to update",
"component", "collector",
"collector_name", coll.Name(),
"err", err,
)
c.endpointErrors.WithLabelValues(err.Endpoint, c.instanceLabel).Inc()
}
wg.Done()
}(collector)
}
wg.Wait()
c.scrapes.WithLabelValues(c.instanceLabel).Inc()
c.scrapes.Collect(ch)
c.endpointErrors.Collect(ch)
}
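
The comments at the top of this file describe the plugin pattern: each collector appends itself to `collectorInstances` from `init()`, and `New` later calls `Register` on every instance that was not removed by an `Option`. A minimal sketch of what an additional collector could look like under that contract; the `firmware` subsystem, its metric, and the idea of a matching fetcher are hypothetical and not part of this commit:

```go
package collector

import (
	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"

	"github.com/st3ga/opnsense-exporter/opnsense"
)

// firmwareCollector is a hypothetical example; it follows the same
// init()-registration pattern as arp_table.go, cron.go and the other collectors.
type firmwareCollector struct {
	log         log.Logger
	subsystem   string
	instance    string
	needsReboot *prometheus.Desc
}

func init() {
	collectorInstances = append(collectorInstances, &firmwareCollector{
		subsystem: "firmware",
	})
}

func (c *firmwareCollector) Name() string { return c.subsystem }

func (c *firmwareCollector) Register(namespace, instanceLabel string, log log.Logger) {
	c.log = log
	c.instance = instanceLabel
	c.needsReboot = buildPrometheusDesc(c.subsystem, "needs_reboot",
		"Whether the firewall reports that a reboot is required (1 = yes, 0 = no)",
		nil,
	)
}

func (c *firmwareCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.needsReboot
}

func (c *firmwareCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
	// A real collector would call a matching fetcher on the opnsense client here
	// (for example a FetchFirmwareStatus method, which is assumed, not implemented).
	ch <- prometheus.MustNewConstMetric(c.needsReboot, prometheus.GaugeValue, 0, c.instance)
	return nil
}
```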


@ -0,0 +1,33 @@
package collector
import (
"testing"
"github.com/go-kit/log"
"github.com/st3ga/opnsense-exporter/opnsense"
)
func TestWithoutArpCollector(t *testing.T) {
client, err := opnsense.NewClient(
"test",
"test",
"test",
"test",
"test",
false,
log.NewNopLogger(),
)
if err != nil {
t.Errorf("Expected no error, got %v", err)
}
collector, err := New(&client, log.NewNopLogger(), "test", WithoutArpTableCollector())
if err != nil {
t.Errorf("Expected no error, got %v", err)
}
for _, c := range collector.collectors {
if c.Name() == "arp_table" {
t.Errorf("Expected no arp collector, but it was found")
}
}
}


@ -0,0 +1,61 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type cronCollector struct {
log log.Logger
subsystem string
instance string
jobsStatus *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &cronCollector{
subsystem: "cron",
})
}
func (c *cronCollector) Name() string {
return c.subsystem
}
func (c *cronCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.jobsStatus = buildPrometheusDesc(c.subsystem, "job_status",
"Cron job status by name and description (1 = enabled, 0 = disabled)",
[]string{"schedule", "description", "command", "origin"},
)
}
func (c *cronCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.jobsStatus
}
func (c *cronCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
crons, err := client.FetchCronTable()
if err != nil {
return err
}
for _, cron := range crons.Cron {
ch <- prometheus.MustNewConstMetric(
c.jobsStatus,
prometheus.GaugeValue,
float64(cron.Status),
cron.Schedule,
cron.Description,
cron.Command,
cron.Origin,
c.instance,
)
}
return nil
}


@ -0,0 +1,105 @@
package collector
import (
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type gatewaysCollector struct {
log log.Logger
subsystem string
instance string
status *prometheus.Desc
lossPercentage *prometheus.Desc
rtt *prometheus.Desc
rttd *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &gatewaysCollector{
subsystem: "gateways",
})
}
func (c *gatewaysCollector) Name() string {
return c.subsystem
}
func (c *gatewaysCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
c.status = buildPrometheusDesc(c.subsystem, "status",
"Status of the gateway by name and address (1 = up, 0 = down, 2 = unkown)",
[]string{"name", "address"},
)
c.lossPercentage = buildPrometheusDesc(
c.subsystem, "loss_percentage",
"The current gateway loss percentage by name and address",
[]string{"name", "adress"},
)
c.rtt = buildPrometheusDesc(
c.subsystem, "rtt_milliseconds",
"RTT is the average (mean) of the round trip time in milliseconds by name and address",
[]string{"name", "adress"},
)
c.rttd = buildPrometheusDesc(
c.subsystem, "rttd_milliseconds",
"RTTd is the standard deviation of the round trip time in milliseconds by name and address",
[]string{"name", "adress"},
)
}
func (c *gatewaysCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.status
ch <- c.lossPercentage
ch <- c.rtt
ch <- c.rttd
}
func (c *gatewaysCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
data, err := client.FetchGateways()
if err != nil {
return err
}
for _, v := range data.Gateways {
ch <- prometheus.MustNewConstMetric(
c.status,
prometheus.GaugeValue,
float64(v.Status),
v.Name,
v.Address,
c.instance,
)
if v.LossPercentage != -1 {
ch <- prometheus.MustNewConstMetric(
c.lossPercentage,
prometheus.GaugeValue,
float64(v.LossPercentage),
v.Name,
v.Address,
c.instance,
)
}
if v.RTTMilliseconds != -1 {
ch <- prometheus.MustNewConstMetric(
c.rtt,
prometheus.GaugeValue,
v.RTTMilliseconds,
v.Name,
v.Address,
c.instance,
)
}
if v.RTTDMilliseconds != -1 {
ch <- prometheus.MustNewConstMetric(
c.rttd,
prometheus.GaugeValue,
v.RTTDMilliseconds,
v.Name,
v.Address,
c.instance,
)
}
}
return nil
}


@ -0,0 +1,112 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type interfacesCollector struct {
log log.Logger
subsystem string
instance string
mtu *prometheus.Desc
bytesReceived *prometheus.Desc
bytesTransmited *prometheus.Desc
multicastsTransmitted *prometheus.Desc
multicastsReceived *prometheus.Desc
inputErrors *prometheus.Desc
outputErrors *prometheus.Desc
collisions *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &interfacesCollector{
subsystem: "interfaces",
})
}
func (c *interfacesCollector) Name() string {
return c.subsystem
}
func (c *interfacesCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.mtu = buildPrometheusDesc(c.subsystem, "mtu_bytes",
"The MTU value of the interface",
[]string{"interface", "device", "type"},
)
c.bytesReceived = buildPrometheusDesc(c.subsystem, "received_bytes_total",
"Bytes received on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.bytesTransmited = buildPrometheusDesc(c.subsystem, "transmitted_bytes_total",
"Bytes transmitted on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.multicastsReceived = buildPrometheusDesc(c.subsystem, "received_multicasts_total",
"Multicasts received on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.multicastsTransmitted = buildPrometheusDesc(c.subsystem, "transmitted_multicasts_total",
"Multicasts transmitted on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.inputErrors = buildPrometheusDesc(c.subsystem, "input_errors_total",
"Input errors on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.outputErrors = buildPrometheusDesc(c.subsystem, "output_errors_total",
"Output errors on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
c.collisions = buildPrometheusDesc(c.subsystem, "collisions_total",
"Collisions on this interface by interface name and device",
[]string{"interface", "device", "type"},
)
}
func (c *interfacesCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.mtu
ch <- c.bytesReceived
ch <- c.bytesTransmited
ch <- c.multicastsReceived
ch <- c.multicastsTransmitted
ch <- c.inputErrors
ch <- c.outputErrors
ch <- c.collisions
}
func (c *interfacesCollector) update(ch chan<- prometheus.Metric, desc *prometheus.Desc, valueType prometheus.ValueType, value float64, labelValues ...string) {
ch <- prometheus.MustNewConstMetric(
desc, valueType, value, labelValues...,
)
}
func (c *interfacesCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
data, err := client.FetchInterfaces()
if err != nil {
return err
}
for _, iface := range data.Interfaces {
c.update(ch, c.mtu, prometheus.GaugeValue, float64(iface.MTU), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.bytesReceived, prometheus.CounterValue, float64(iface.BytesReceived), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.bytesTransmited, prometheus.CounterValue, float64(iface.BytesTransmitted), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.multicastsReceived, prometheus.CounterValue, float64(iface.MulticastsReceived), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.multicastsTransmitted, prometheus.CounterValue, float64(iface.MulticastsTransmitted), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.inputErrors, prometheus.CounterValue, float64(iface.InputErrors), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.outputErrors, prometheus.CounterValue, float64(iface.OutputErrors), iface.Name, iface.Device, iface.Type, c.instance)
c.update(ch, c.collisions, prometheus.CounterValue, float64(iface.Collisions), iface.Name, iface.Device, iface.Type, c.instance)
}
return nil
}


@ -0,0 +1,62 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type openVPNCollector struct {
log log.Logger
subsystem string
instance string
instances *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &openVPNCollector{
subsystem: "openvpn",
})
}
func (c *openVPNCollector) Name() string {
return c.subsystem
}
func (c *openVPNCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.instances = buildPrometheusDesc(c.subsystem, "instances",
"OpenVPN instances (1 = enabled, 0 = disabled) by role (server, client)",
[]string{"uuid", "role", "description", "device_type"},
)
}
func (c *openVPNCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.instances
}
func (c *openVPNCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
instances, err := client.FetchOpenVPNInstances()
if err != nil {
return err
}
for _, instance := range instances.Rows {
ch <- prometheus.MustNewConstMetric(
c.instances,
prometheus.GaugeValue,
float64(instance.Enabled),
instance.UUID,
instance.Role,
instance.Description,
instance.DevType,
c.instance,
)
}
return nil
}


@ -0,0 +1,40 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type protocolCollector struct {
log log.Logger
subsystem string
instance string
}
func init() {
collectorInstances = append(collectorInstances, &protocolCollector{
subsystem: "proto_statistics",
})
}
func (c *protocolCollector) Name() string {
return c.subsystem
}
func (c *protocolCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
}
func (c *protocolCollector) Describe(ch chan<- *prometheus.Desc) {
}
func (c *protocolCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
return nil
}


@ -0,0 +1,90 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type servicesCollector struct {
log log.Logger
subsystem string
instance string
services *prometheus.Desc
servicesRunning *prometheus.Desc
servicesStopped *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &servicesCollector{
subsystem: "services",
})
}
func (c *servicesCollector) Name() string {
return c.subsystem
}
func (c *servicesCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.services = buildPrometheusDesc(c.subsystem, "status",
"Service status by name and description (1 = running, 0 = stopped)",
[]string{"name", "description"},
)
c.servicesRunning = buildPrometheusDesc(c.subsystem, "running_total",
"Total number of running services",
nil,
)
c.servicesStopped = buildPrometheusDesc(c.subsystem, "stopped_total",
"Total number of stopped services",
nil,
)
}
func (c *servicesCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.services
ch <- c.servicesRunning
ch <- c.servicesStopped
}
func (c *servicesCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
services, err := client.FetchServices()
if err != nil {
return err
}
ch <- prometheus.MustNewConstMetric(
c.servicesRunning, prometheus.GaugeValue,
float64(services.TotalRunning),
c.instance,
)
ch <- prometheus.MustNewConstMetric(
c.servicesStopped, prometheus.GaugeValue,
float64(services.TotalStopped),
c.instance,
)
for _, service := range services.Services {
ch <- prometheus.MustNewConstMetric(
c.services, prometheus.GaugeValue,
float64(service.Status),
service.Name,
service.Description,
c.instance,
)
}
return nil
}


@ -0,0 +1,57 @@
package collector
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/st3ga/opnsense-exporter/opnsense"
)
type unboundDNSCollector struct {
log log.Logger
subsystem string
instance string
uptime *prometheus.Desc
}
func init() {
collectorInstances = append(collectorInstances, &unboundDNSCollector{
subsystem: "unbound_dns",
})
}
func (c *unboundDNSCollector) Name() string {
return c.subsystem
}
func (c *unboundDNSCollector) Register(namespace, instanceLabel string, log log.Logger) {
c.log = log
c.instance = instanceLabel
level.Debug(c.log).
Log("msg", "Registering collector", "collector", c.Name())
c.uptime = buildPrometheusDesc(c.subsystem, "uptime_seconds",
"Uptime of the unbound DNS service in seconds",
nil,
)
}
func (c *unboundDNSCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- c.uptime
}
func (c *unboundDNSCollector) Update(client *opnsense.Client, ch chan<- prometheus.Metric) *opnsense.APICallError {
data, err := client.FetchUnboundOverview()
if err != nil {
return err
}
ch <- prometheus.MustNewConstMetric(
c.uptime,
prometheus.GaugeValue,
float64(data.UptimeSeconds),
c.instance,
)
return nil
}


@ -0,0 +1,22 @@
package collector
import (
"github.com/prometheus/client_golang/prometheus"
)
const instanceLabelName = "opnsense_instance"
func buildPrometheusDesc(subsystem, name, help string, labels []string) *prometheus.Desc {
if labels != nil {
labels = append(labels, instanceLabelName)
} else {
labels = []string{instanceLabelName}
}
return prometheus.NewDesc(
prometheus.BuildFQName(namespace, subsystem, name),
help,
labels,
nil,
)
}
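
Note that `buildPrometheusDesc` appends the `opnsense_instance` label to whatever label list it is given, which is why every collector above passes `c.instance` as the last value to `prometheus.MustNewConstMetric`. A small sketch of the resulting descriptor, reusing the gateways status metric from gateways.go as the example; the concrete label values are made up:

```go
package collector

import "github.com/prometheus/client_golang/prometheus"

// Example (sketch): the helper appends "opnsense_instance", so this descriptor
// has the fully qualified name opnsense_gateways_status with the ordered label
// set ["name", "address", "opnsense_instance"].
var exampleStatusDesc = buildPrometheusDesc("gateways", "status",
	"Status of the gateway by name and address (1 = up, 0 = down, 2 = unknown)",
	[]string{"name", "address"},
)

// emitExampleStatus shows why the instance label value always comes last.
func emitExampleStatus(ch chan<- prometheus.Metric, instance string) {
	ch <- prometheus.MustNewConstMetric(
		exampleStatusDesc,
		prometheus.GaugeValue,
		1,                       // 1 = up
		"WAN_GW", "203.0.113.1", // "name", "address"
		instance,                // value for the appended "opnsense_instance" label
	)
}
```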

202
main.go Normal file

@ -0,0 +1,202 @@
package main
import (
"fmt"
"net/http"
"os"
"os/signal"
"runtime"
"syscall"
"github.com/alecthomas/kingpin/v2"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
promcollectors "github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/prometheus/common/promlog"
"github.com/prometheus/exporter-toolkit/web"
"github.com/prometheus/exporter-toolkit/web/kingpinflag"
"github.com/st3ga/opnsense-exporter/internal/collector"
"github.com/st3ga/opnsense-exporter/opnsense"
)
var version = ""
func main() {
var (
logLevel = kingpin.Flag(
"log.level",
"Log level. One of: [debug, info, warn, error]").
Default("info").
String()
logFormat = kingpin.Flag(
"log.format",
"Log format. One of: [logfmt, json]").
Default("logfmt").
String()
metricsPath = kingpin.Flag(
"web.telemetry-path",
"Path under which to expose metrics.",
).Default("/metrics").String()
disableExporterMetrics = kingpin.Flag(
"web.disable-exporter-metrics",
"Exclude metrics about the exporter itself (promhttp_*, process_*, go_*).",
).Envar("OPNSENSE_EXPORTER_DISABLE_EXPORTER_METRICS").Bool()
maxProcs = kingpin.Flag(
"runtime.gomaxprocs",
"The target number of CPUs that the Go runtime will run on (GOMAXPROCS)",
).Envar("GOMAXPROCS").Default("2").Int()
instanceLabel = kingpin.Flag(
"exporter.instance-label",
"Label to use to identify the instance in every metric. "+
"If you have multiple instances of the exporter, you can differentiate them by using "+
"different value in this flag, that represents the instance of the target OPNsense.",
).Envar("OPNSENSE_EXPORTER_INSTANCE_LABEL").Required().String()
arpTableCollectorDisabled = kingpin.Flag(
"exporter.disable-arp-table",
"Disable the scraping of the ARP table",
).Envar("OPNSENSE_EXPORTER_DISABLE_ARP_TABLE").Default("false").Bool()
cronTableCollectorDisabled = kingpin.Flag(
"exporter.disable-cron-table",
"Disable the scraping of the cron table",
).Envar("OPNSENSE_EXPORTER_DISABLE_CRON_TABLE").Default("false").Bool()
opnsenseProtocol = kingpin.Flag(
"opnsense.protocol",
"Protocol to use to connect to OPNsense API. One of: [http, https]",
).Envar("OPNSENSE_EXPORTER_OPS_PROTOCOL").Required().String()
opnsenseAPI = kingpin.Flag(
"opnsense.address",
"Hostname or IP address of OPNsense API",
).Envar("OPNSENSE_EXPORTER_OPS_API").Required().String()
opnsenseAPIKey = kingpin.Flag(
"opnsense.api-key",
"API key to use to connect to OPNsense API",
).Envar("OPNSENSE_EXPORTER_OPS_API_KEY").Required().String()
opnsenseAPISecret = kingpin.Flag(
"opnsense.api-secret",
"API secret to use to connect to OPNsense API",
).Envar("OPNSENSE_EXPORTER_OPS_API_SECRET").Required().String()
opnsenseInsecure = kingpin.Flag(
"opnsense.insecure",
"Disable TLS certificate verification",
).Envar("OPNSENSE_EXPORTER_OPS_INSECURE").Default("false").Bool()
webConfig = kingpinflag.AddFlags(kingpin.CommandLine, ":8080")
)
kingpin.CommandLine.UsageWriter(os.Stdout)
kingpin.HelpFlag.Short('h')
kingpin.Parse()
promlogConfig := &promlog.Config{
Level: &promlog.AllowedLevel{},
Format: &promlog.AllowedFormat{},
}
promlogConfig.Level.Set(*logLevel)
promlogConfig.Format.Set(*logFormat)
logger := promlog.New(promlogConfig)
level.Info(logger).
Log("msg", "Starting opnsense-exporter", "version", version)
runtime.GOMAXPROCS(*maxProcs)
level.Debug(logger).
Log("msg", "settings Go MAXPROCS", "procs", runtime.GOMAXPROCS(0))
opnsenseClient, err := opnsense.NewClient(
*opnsenseProtocol,
*opnsenseAPI,
*opnsenseAPIKey,
*opnsenseAPISecret,
version,
*opnsenseInsecure,
logger,
)
if err != nil {
level.Error(logger).
Log("msg", "opnsense client build failed", "err", err)
os.Exit(1)
}
level.Debug(logger).Log(
"msg", fmt.Sprintf("OPNsense registered endpoints %s", opnsenseClient.Endpoints()),
)
r := prometheus.NewRegistry()
if !*disableExporterMetrics {
r.MustRegister(
promcollectors.NewProcessCollector(promcollectors.ProcessCollectorOpts{}),
)
r.MustRegister(promcollectors.NewGoCollector())
}
collectorOptionFuncs := []collector.Option{}
if *arpTableCollectorDisabled {
collectorOptionFuncs = append(collectorOptionFuncs, collector.WithoutArpTableCollector())
}
if *cronTableCollectorDisabled {
collectorOptionFuncs = append(collectorOptionFuncs, collector.WithoutCronCollector())
}
collectorInstance, err := collector.New(&opnsenseClient, logger, *instanceLabel, collectorOptionFuncs...)
if err != nil {
level.Error(logger).
Log("msg", "failed to construct the collecotr", "err", err)
os.Exit(1)
}
r.MustRegister(collectorInstance)
handler := promhttp.HandlerFor(r, promhttp.HandlerOpts{})
http.Handle(*metricsPath, handler)
if *metricsPath != "/" && *metricsPath != "" {
landingConfig := web.LandingConfig{
Name: "OPNsense Exporter",
Description: "Prometheus OPNsense Firewall Exporter",
Version: version,
Links: []web.LandingLinks{
{
Address: *metricsPath,
Text: "Metrics",
},
},
}
landingPage, err := web.NewLandingPage(landingConfig)
if err != nil {
level.Error(logger).Log("err", err)
os.Exit(1)
}
http.Handle("/", landingPage)
}
term := make(chan os.Signal, 1)
srvClose := make(chan struct{})
signal.Notify(term, os.Interrupt, syscall.SIGTERM)
srv := &http.Server{}
go func() {
if err := web.ListenAndServe(srv, webConfig, logger); err != nil {
level.Error(logger).
Log("msg", "Error received from the HTTP server", "err", err)
close(srvClose)
}
}()
for {
select {
case <-term:
level.Info(logger).
Log("msg", "Received SIGTERM, exiting gracefully...")
os.Exit(0)
case <-srvClose:
os.Exit(1)
}
}
}

1
opnsense/acme_client.go Normal file

@ -0,0 +1 @@
package opnsense

77
opnsense/arp_table.go Normal file

@ -0,0 +1,77 @@
package opnsense
import (
"strings"
)
type arpSearchResponse struct {
Total int `json:"total"`
RowCount int `json:"rowCount"`
Current int `json:"current"`
Rows []struct {
Mac string `json:"mac"`
IP string `json:"ip"`
Intf string `json:"intf"`
Expired bool `json:"expired"`
Expires int `json:"expires"`
Permanent bool `json:"permanent"`
Type string `json:"type"`
Manufacturer string `json:"manufacturer"`
Hostname string `json:"hostname"`
IntfDescription string `json:"intf_description"`
} `json:"rows"`
}
type Arp struct {
Mac string
IP string
Expired bool
Expires int
Permanent bool
Type string
Hostname string
IntfDescription string
}
type ArpTable struct {
TotalEntries int
Arp []Arp
}
const fetchArpPayload = `{"current":1,"rowCount":-1,"sort":{},"searchPhrase":"","resolve":"no"}`
func (c *Client) FetchArpTable() (ArpTable, *APICallError) {
var resp arpSearchResponse
var arpTable ArpTable
path, ok := c.endpoints["arp"]
if !ok {
return arpTable, &APICallError{
Endpoint: "arp",
Message: "endpoint not found",
StatusCode: 0,
}
}
if err := c.do("POST", path, strings.NewReader(fetchArpPayload), &resp); err != nil {
return arpTable, err
}
for _, arp := range resp.Rows {
a := Arp{
Mac: arp.Mac,
IP: arp.IP,
Expired: arp.Expired,
Expires: arp.Expires,
Permanent: arp.Permanent,
Type: arp.Type,
Hostname: arp.Hostname,
IntfDescription: arp.IntfDescription,
}
arpTable.Arp = append(arpTable.Arp, a)
}
arpTable.TotalEntries = resp.Total
return arpTable, nil
}

192
opnsense/client.go Normal file

@ -0,0 +1,192 @@
package opnsense
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"regexp"
"runtime"
"time"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
)
// MaxRetries is the maximum number of retries
// when a request to the OPNsense API fails
const MaxRetries = 3
// EndpointName is the custom type for name of an endpoint definition
type EndpointName string
// EndpointPath is the custom type for url path of an endpoint definition
type EndpointPath string
// Client is an OPNsense API client
type Client struct {
log log.Logger
baseURL string
key string
secret string
sslInsecure bool
endpoints map[EndpointName]EndpointPath
httpClient *http.Client
headers map[string]string
gatewayLossRegex *regexp.Regexp
gatewayRTTRegex *regexp.Regexp
}
// NewClient creates a new OPNsense API Client
func NewClient(protocol, address, key, secret, userAgentVersion string, sslInsecure bool, log log.Logger) (Client, error) {
sslPool, err := x509.SystemCertPool()
if err != nil {
return Client{}, errors.Join(fmt.Errorf("failed to load system cert pool"), err)
}
gatewayLossRegex, err := regexp.Compile(`\d\.\d %`)
if err != nil {
return Client{}, errors.Join(fmt.Errorf("failed to build regex for gatewayLoss calculation"), err)
}
gatewayRTTRegex, err := regexp.Compile(`\d+\.\d+ ms`)
if err != nil {
return Client{}, errors.Join(fmt.Errorf("failed to build regex for gatewayRTT calculation"), err)
}
client := Client{
log: log,
baseURL: fmt.Sprintf("%s://%s", protocol, address),
key: key,
secret: secret,
gatewayLossRegex: gatewayLossRegex,
gatewayRTTRegex: gatewayRTTRegex,
endpoints: map[EndpointName]EndpointPath{
"services": "api/core/service/search",
"protocolStatistics": "api/diagnostics/interface/getProtocolStatistics",
"arp": "api/diagnostics/interface/search_arp",
"dhcpv4": "api/dhcpv4/leases/searchLease",
"openVPNInstances": "api/openvpn/instances/search",
"interfaces": "api/diagnostics/traffic/interface",
"systemInfo": "widgets/api/get.php?load=system%2Ctemperature",
"gatewaysStatus": "api/routes/gateway/status",
"unboundDNSStatus": "api/unbound/diagnostics/stats",
"cronJobs": "api/cron/settings/searchJobs",
},
headers: map[string]string{
"Accept": "application/json",
"User-Agent": fmt.Sprintf("prometheus-opnsense-exporter/%s", userAgentVersion),
"Accept-Encoding": "gzip, deflate, br",
},
sslInsecure: sslInsecure,
httpClient: &http.Client{
Timeout: 10 * time.Second,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: sslInsecure,
RootCAs: sslPool,
},
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 1 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
ForceAttemptHTTP2: true,
MaxIdleConnsPerHost: runtime.GOMAXPROCS(0) + 1,
},
},
}
return client, nil
}
// Endpoints returns a map of all the endpoints
// that are called by the client.
func (c *Client) Endpoints() map[EndpointName]EndpointPath {
return c.endpoints
}
// do sends a request to the OPNsense API and
// unmarshals the JSON response into responseStruct.
func (c *Client) do(method string, path EndpointPath, body io.Reader, responseStruct any) *APICallError {
url := fmt.Sprintf("%s/%s", c.baseURL, string(path))
req, err := http.NewRequest(method, url, body)
if err != nil {
return &APICallError{
Endpoint: string(path),
Message: err.Error(),
StatusCode: 0,
}
}
req.SetBasicAuth(c.key, c.secret)
for k, v := range c.headers {
req.Header.Add(k, v)
}
if method == "POST" {
req.Header.Add("Content-Type", "application/json;charset=utf-8")
}
level.Debug(c.log).
Log("msg", "fetching data", "component", "opnsense-client", "url", url, "method", method)
// Retry the request up to MaxRetries times
for i := 0; i < MaxRetries; i++ {
resp, err := c.httpClient.Do(req)
if err != nil {
level.Error(c.log).
Log("msg", "failed to send request; retrying",
"component", "opnsense-client",
"err", err.Error())
time.Sleep(25 * time.Millisecond)
continue
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return &APICallError{
Endpoint: string(path),
Message: fmt.Sprintf("failed to read response body: %s", err.Error()),
StatusCode: resp.StatusCode,
}
}
resp.Body.Close()
if resp.StatusCode >= 200 && resp.StatusCode < 300 {
err := json.Unmarshal(body, &responseStruct)
if err != nil {
level.Debug(c.log).
Log("msg", "failed to unmarshal response body", "url", url, "body", string(body))
return &APICallError{
Endpoint: string(path),
Message: fmt.Sprintf("failed to unmarshal response body: %s", err.Error()),
StatusCode: resp.StatusCode,
}
}
return nil
} else {
return &APICallError{
Endpoint: string(path),
Message: string(body),
StatusCode: resp.StatusCode,
}
}
}
return &APICallError{
Endpoint: string(path),
Message: fmt.Sprintf("max retries of %d times reached", MaxRetries),
StatusCode: 0,
}
}
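
The `do` helper above is what every fetcher in the package goes through: it sets basic auth and the JSON headers, retries up to `MaxRetries` times, and unmarshals 2xx responses into the caller's struct. A minimal sketch of exercising that flow against a fake API with `net/http/httptest`, using the ARP search endpoint from arp_table.go; the JSON payload is illustrative, not captured from a real OPNsense installation:

```go
package opnsense_test

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/go-kit/log"

	"github.com/st3ga/opnsense-exporter/opnsense"
)

func TestFetchArpTableAgainstFakeAPI(t *testing.T) {
	// Fake OPNsense API serving the endpoint that FetchArpTable POSTs to.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/api/diagnostics/interface/search_arp" {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"total":1,"rowCount":1,"current":1,"rows":[
			{"mac":"00:11:22:33:44:55","ip":"192.0.2.10","intf":"igb0","expired":false,
			 "expires":1200,"permanent":false,"type":"ethernet","manufacturer":"",
			 "hostname":"router","intf_description":"LAN"}]}`))
	}))
	defer srv.Close()

	// NewClient takes the protocol and host separately, so strip the scheme.
	client, err := opnsense.NewClient(
		"http", strings.TrimPrefix(srv.URL, "http://"),
		"key", "secret", "test", false, log.NewNopLogger(),
	)
	if err != nil {
		t.Fatalf("NewClient failed: %v", err)
	}

	table, apiErr := client.FetchArpTable()
	if apiErr != nil {
		t.Fatalf("FetchArpTable failed: %v", apiErr)
	}
	if table.TotalEntries != 1 || len(table.Arp) != 1 || table.Arp[0].Hostname != "router" {
		t.Fatalf("unexpected ARP table: %+v", table)
	}
}
```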

89
opnsense/cron.go Normal file

@ -0,0 +1,89 @@
package opnsense
import (
"fmt"
"strings"
"github.com/go-kit/log/level"
)
type cronSearchResponse struct {
Rows []struct {
UUID string `json:"uuid"`
Enabled string `json:"enabled"`
Minutes string `json:"minutes"`
Hours string `json:"hours"`
Days string `json:"days"`
Months string `json:"months"`
Weekdays string `json:"weekdays"`
Description string `json:"description"`
Command string `json:"command"`
Origin string `json:"origin"`
} `json:"rows"`
RowCount int `json:"rowCount"`
Total int `json:"total"`
Current int `json:"current"`
}
type CronStatus int
const (
CronStatusDisabled CronStatus = iota
CronStatusEnabled
)
type Cron struct {
UUID string
Status CronStatus
Schedule string
Description string
Command string
Origin string
}
type CronTable struct {
TotalEntries int
Cron []Cron
}
const fetchCronPayload = `{"current":1,"rowCount":-1,"sort":{},"searchPhrase":"","resolve":"no"}`
func (c *Client) FetchCronTable() (CronTable, *APICallError) {
var resp cronSearchResponse
var cronTable CronTable
path, ok := c.endpoints["cronJobs"]
if !ok {
return cronTable, &APICallError{
Endpoint: "cron",
Message: "endpoint not found",
StatusCode: 0,
}
}
if err := c.do("POST", path, strings.NewReader(fetchCronPayload), &resp); err != nil {
return cronTable, err
}
for _, cron := range resp.Rows {
cronTable.TotalEntries++
intStatus, err := parseStringToInt(cron.Enabled, path)
if err != nil {
level.Warn(c.log).
Log("msg", "unable to parse cron entry status", "err", err)
continue
}
cronTable.Cron = append(cronTable.Cron, Cron{
UUID: cron.UUID,
Status: CronStatus(intStatus),
Description: cron.Description,
Schedule: fmt.Sprintf("%s %s %s %s %s", cron.Minutes, cron.Hours, cron.Days, cron.Months, cron.Weekdays),
Command: cron.Command,
Origin: cron.Origin,
})
}
return cronTable, nil
}

1
opnsense/dhcpv4.go Normal file

@ -0,0 +1 @@
package opnsense

16
opnsense/errors.go Normal file

@ -0,0 +1,16 @@
package opnsense
import "fmt"
// APICallError is an error returned by the OPNsense API
type APICallError struct {
Endpoint string
StatusCode int
Message string
}
func (e APICallError) Error() string {
return fmt.Sprintf(
"opnsense-client api call error: endpoint: %s; failed status code: %d; msg: %s", e.Endpoint, e.StatusCode, e.Message,
)
}

94
opnsense/gateways.go Normal file

@ -0,0 +1,94 @@
package opnsense
import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
)
// gatewaysStatusResponse is the response from the OPNsense API that contains the gateways status details
// The data is constructed in this script:
// ---> https://github.com/opnsense/core/blob/master/src/opnsense/scripts/routes/gateway_status.php
// Following the reverse engineering of the call:
// ---> https://github.com/opnsense/core/blob/master/src/etc/inc/plugins.inc.d/dpinger.inc#L368
// From this file we know that Loss and Delay always have the same format of '%0.1f ms'
type gatewaysStatusResponse struct {
Items []struct {
Name string `json:"name"`
Address string `json:"address"`
Status string `json:"status"`
Loss string `json:"loss"`
Delay string `json:"delay"`
Stddev string `json:"stddev"`
StatusTranslated string `json:"status_translated"`
} `json:"items"`
Status string `json:"status"`
}
// GatewayStatus is the custom type that represents the status of a gateway
type GatewayStatus int
const (
GatewayStatusOffline GatewayStatus = iota
GatewayStatusOnline
GatewayStatusUnknown
)
type Gateway struct {
Name string
Address string
Status GatewayStatus
RTTMilliseconds float64
RTTDMilliseconds float64
LossPercentage float64
}
type Gateways struct {
Gateways []Gateway
}
// parseGatewayStatus parses a string status to a GatewayStatus type.
func parseGatewayStatus(statusTranslated string, logger log.Logger, originalStatus string) GatewayStatus {
switch statusTranslated {
case "Online":
return GatewayStatusOnline
case "Offline":
return GatewayStatusOffline
default:
level.Warn(logger).
Log("msg", "unknown gateway status detected", "status", originalStatus)
return GatewayStatusUnknown
}
}
// FetchGateways fetches the gateways status details from the OPNsense API
// and returns a safe wrapper Gateways struct.
func (c *Client) FetchGateways() (Gateways, *APICallError) {
var resp gatewaysStatusResponse
var data Gateways
url, ok := c.endpoints["gatewaysStatus"]
if !ok {
return data, &APICallError{
Endpoint: "gatewaysStatus",
Message: "endpoint not found in client endpoints",
StatusCode: 0,
}
}
err := c.do("GET", url, nil, &resp)
if err != nil {
return data, err
}
for _, v := range resp.Items {
data.Gateways = append(data.Gateways, Gateway{
Name: v.Name,
Address: v.Address,
Status: parseGatewayStatus(v.StatusTranslated, c.log, v.Status),
RTTMilliseconds: parseStringToFloatWithReplace(v.Delay, c.gatewayRTTRegex, " ms", "rtt", c.log),
RTTDMilliseconds: parseStringToFloatWithReplace(v.Stddev, c.gatewayRTTRegex, " ms", "rttd", c.log),
LossPercentage: parseStringToFloatWithReplace(v.Loss, c.gatewayLossRegex, " %", "loss", c.log),
})
}
return data, nil
}
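
`FetchGateways` leans on `parseStringToFloatWithReplace` from the package's utils file, which is not part of this excerpt; judging from the call sites and the regexes built in `NewClient`, it extracts the numeric portion of strings such as "12.3 ms" or "0.0 %" and appears to return -1 when nothing can be parsed (the gateways collector skips samples with that value). A sketch of what such a helper might look like under those assumptions:

```go
package opnsense

import (
	"regexp"
	"strconv"
	"strings"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

// parseGatewayValue is a sketch of the helper FetchGateways depends on
// (the real parseStringToFloatWithReplace lives in the package's utils file,
// which is not shown in this commit view). It pulls the numeric part out of
// values like "12.3 ms" or "0.0 %" and, by assumption, returns -1 when nothing
// matches so the collector can skip the corresponding sample.
func parseGatewayValue(raw string, re *regexp.Regexp, suffix, metric string, logger log.Logger) float64 {
	match := re.FindString(raw)
	if match == "" {
		level.Debug(logger).Log("msg", "gateway value not reported", "metric", metric, "value", raw)
		return -1
	}
	parsed, err := strconv.ParseFloat(strings.TrimSuffix(match, suffix), 64)
	if err != nil {
		level.Warn(logger).Log("msg", "failed to parse gateway value", "metric", metric, "value", raw, "err", err)
		return -1
	}
	return parsed
}
```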

136
opnsense/interfaces.go Normal file

@ -0,0 +1,136 @@
package opnsense
// TODO: Add client fetching
type InterfaceDetails struct {
Device string `json:"device"`
Driver string `json:"driver"`
Index string `json:"index"`
Flags string `json:"flags"`
PromiscuousListeners string `json:"promiscuous listeners"`
SendQueueLength string `json:"send queue length"`
SendQueueMaxLength string `json:"send queue max length"`
SendQueueDrops string `json:"send queue drops"`
Type string `json:"type"`
AddressLength string `json:"address length"`
HeaderLength string `json:"header length"`
LinkState string `json:"link state"`
Vhid string `json:"vhid"`
Datalen string `json:"datalen"`
MTU string `json:"mtu"`
Metric string `json:"metric"`
LineRate string `json:"line rate"`
PacketsReceived string `json:"packets received"`
PacketsTransmitted string `json:"packets transmitted"`
BytesReceived string `json:"bytes received"`
BytesTransmitted string `json:"bytes transmitted"`
OutputErrors string `json:"output errors"`
InputErrors string `json:"input errors"`
Collisions string `json:"collisions"`
MulticastsReceived string `json:"multicasts received"`
MulticastsTransmitted string `json:"multicasts transmitted"`
InputQueueDrops string `json:"input queue drops"`
PacketsForUnknownProtocol string `json:"packets for unknown protocol"`
HWOffloadCapabilities string `json:"HW offload capabilities"`
UptimeAtAttachOrStatReset string `json:"uptime at attach or stat reset"`
Name string `json:"name"`
}
// interfaceResponse is the struct returned by the OPNsense API
// when requesting the interfaces. The response is a JSON object
// keyed by interface name, with an InterfaceDetails struct as each value
type interfaceResponse struct {
Interface map[string]InterfaceDetails `json:"interfaces"`
}
type Interface struct {
Name string
Device string
Type string
MTU int
PacketsReceived int
PacketsTransmitted int
BytesReceived int
BytesTransmitted int
MulticastsReceived int
MulticastsTransmitted int
InputErrors int
OutputErrors int
Collisions int
}
type Interfaces struct {
Interfaces []Interface
}
// sliceIntToMapStringInt is a helper function to convert a slice of strings to a map of string:int
// The key of the map is the string value in the slice and
// the value of the map is the int value of the string.
func sliceIntToMapStringInt(strings []string, url EndpointPath) (map[string]int, *APICallError) {
ints := make(map[string]int)
for _, str := range strings {
value, err := parseStringToInt(str, url)
if err != nil {
return nil, err
}
ints[str] = value
}
return ints, nil
}
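// FetchInterfaces fetches the interface details from the OPNsense API,
// converts the string counters to integers and returns them as Interfaces.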
func (c *Client) FetchInterfaces() (Interfaces, *APICallError) {
var resp interfaceResponse
var data Interfaces
url, ok := c.endpoints["interfaces"]
if !ok {
return data, &APICallError{
Endpoint: "interfaces",
Message: "endpoint not found in client endpoints",
StatusCode: 0,
}
}
err := c.do("GET", url, nil, &resp)
if err != nil {
return data, err
}
for _, v := range resp.Interface {
convertedValues, err := sliceIntToMapStringInt(
[]string{
v.MTU, v.BytesReceived, v.BytesTransmitted,
v.PacketsReceived, v.PacketsTransmitted,
v.MulticastsReceived, v.MulticastsTransmitted,
v.InputErrors, v.OutputErrors,
v.Collisions,
},
url,
)
if err != nil {
return data, err
}
data.Interfaces = append(data.Interfaces, Interface{
Name: v.Name,
Device: v.Device,
Type: v.Type,
MTU: convertedValues[v.MTU],
BytesReceived: convertedValues[v.BytesReceived],
BytesTransmitted: convertedValues[v.BytesTransmitted],
PacketsReceived: convertedValues[v.PacketsReceived],
PacketsTransmitted: convertedValues[v.PacketsTransmitted],
MulticastsReceived: convertedValues[v.MulticastsReceived],
MulticastsTransmitted: convertedValues[v.MulticastsTransmitted],
InputErrors: convertedValues[v.InputErrors],
OutputErrors: convertedValues[v.OutputErrors],
Collisions: convertedValues[v.Collisions],
})
}
return data, nil
}

63
opnsense/openvpn.go Normal file
View file

@@ -0,0 +1,63 @@
package opnsense
import "strings"
const fetchOpenVPNPayload = `{"current":1,"rowCount":-1,"sort":{},"searchPhrase":""}`
type openVPNSearchResponse struct {
Rows []struct {
UUID string `json:"uuid"`
Description string `json:"description"`
Role string `json:"role"`
DevType string `json:"dev_type"`
Enabled string `json:"enabled"`
} `json:"rows"`
RowCount int `json:"rowCount"`
Total int `json:"total"`
Current int `json:"current"`
}
type OpenVPN struct {
UUID string
Description string
Role string
DevType string
Enabled int
}
type OpenVPNInstances struct {
Rows []OpenVPN
}
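// FetchOpenVPNInstances fetches the configured OpenVPN instances
// from the OPNsense API search endpoint.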
func (c *Client) FetchOpenVPNInstances() (OpenVPNInstances, *APICallError) {
var resp openVPNSearchResponse
var data OpenVPNInstances
url, ok := c.endpoints["openVPNInstances"]
if !ok {
return data, &APICallError{
Endpoint: "openvpn",
Message: "endpoint not found in client endpoints",
StatusCode: 0,
}
}
if err := c.do("POST", url, strings.NewReader(fetchOpenVPNPayload), &resp); err != nil {
return data, err
}
for _, v := range resp.Rows {
enabled, err := parseStringToInt(v.Enabled, url)
if err != nil {
return data, err
}
data.Rows = append(data.Rows, OpenVPN{
UUID: v.UUID,
Description: v.Description,
Role: strings.ToLower(v.Role),
DevType: v.DevType,
Enabled: enabled,
})
}
return data, nil
}

View file

@@ -0,0 +1,281 @@
package opnsense
type protocolStatisticsResponse struct {
Statistics struct {
TCP struct {
SentPackets int `json:"sent-packets"`
SentDataPackets int `json:"sent-data-packets"`
SentDataBytes int `json:"sent-data-bytes"`
SentRetransmittedPackets int `json:"sent-retransmitted-packets"`
SentRetransmittedBytes int `json:"sent-retransmitted-bytes"`
SentUnnecessaryRetransmittedPackets int `json:"sent-unnecessary-retransmitted-packets"`
SentResendsByMtuDiscovery int `json:"sent-resends-by-mtu-discovery"`
SentAckOnlyPackets int `json:"sent-ack-only-packets"`
SentPacketsDelayed int `json:"sent-packets-delayed"`
SentUrgOnlyPackets int `json:"sent-urg-only-packets"`
SentWindowProbePackets int `json:"sent-window-probe-packets"`
SentWindowUpdatePackets int `json:"sent-window-update-packets"`
SentControlPackets int `json:"sent-control-packets"`
ReceivedPackets int `json:"received-packets"`
ReceivedAckPackets int `json:"received-ack-packets"`
ReceivedAckBytes int `json:"received-ack-bytes"`
ReceivedDuplicateAcks int `json:"received-duplicate-acks"`
ReceivedUDPTunneledPkts int `json:"received-udp-tunneled-pkts"`
ReceivedBadUDPTunneledPkts int `json:"received-bad-udp-tunneled-pkts"`
ReceivedAcksForUnsentData int `json:"received-acks-for-unsent-data"`
ReceivedInSequencePackets int `json:"received-in-sequence-packets"`
ReceivedInSequenceBytes int `json:"received-in-sequence-bytes"`
ReceivedCompletelyDuplicatePackets int `json:"received-completely-duplicate-packets"`
ReceivedCompletelyDuplicateBytes int `json:"received-completely-duplicate-bytes"`
ReceivedOldDuplicatePackets int `json:"received-old-duplicate-packets"`
ReceivedSomeDuplicatePackets int `json:"received-some-duplicate-packets"`
ReceivedSomeDuplicateBytes int `json:"received-some-duplicate-bytes"`
ReceivedOutOfOrder int `json:"received-out-of-order"`
ReceivedOutOfOrderBytes int `json:"received-out-of-order-bytes"`
ReceivedAfterWindowPackets int `json:"received-after-window-packets"`
ReceivedAfterWindowBytes int `json:"received-after-window-bytes"`
ReceivedWindowProbes int `json:"received-window-probes"`
ReceiveWindowUpdatePackets int `json:"receive-window-update-packets"`
ReceivedAfterClosePackets int `json:"received-after-close-packets"`
DiscardBadChecksum int `json:"discard-bad-checksum"`
DiscardBadHeaderOffset int `json:"discard-bad-header-offset"`
DiscardTooShort int `json:"discard-too-short"`
DiscardReassemblyQueueFull int `json:"discard-reassembly-queue-full"`
ConnectionRequests int `json:"connection-requests"`
ConnectionsAccepts int `json:"connections-accepts"`
BadConnectionAttempts int `json:"bad-connection-attempts"`
ListenQueueOverflows int `json:"listen-queue-overflows"`
IgnoredInWindowResets int `json:"ignored-in-window-resets"`
ConnectionsEstablished int `json:"connections-established"`
ConnectionsHostcacheRtt int `json:"connections-hostcache-rtt"`
ConnectionsHostcacheRttvar int `json:"connections-hostcache-rttvar"`
ConnectionsHostcacheSsthresh int `json:"connections-hostcache-ssthresh"`
ConnectionsClosed int `json:"connections-closed"`
ConnectionDrops int `json:"connection-drops"`
ConnectionsUpdatedRttOnClose int `json:"connections-updated-rtt-on-close"`
ConnectionsUpdatedVarianceOnClose int `json:"connections-updated-variance-on-close"`
ConnectionsUpdatedSsthreshOnClose int `json:"connections-updated-ssthresh-on-close"`
EmbryonicConnectionsDropped int `json:"embryonic-connections-dropped"`
SegmentsUpdatedRtt int `json:"segments-updated-rtt"`
SegmentUpdateAttempts int `json:"segment-update-attempts"`
RetransmitTimeouts int `json:"retransmit-timeouts"`
ConnectionsDroppedByRetransmitTimeout int `json:"connections-dropped-by-retransmit-timeout"`
PersistTimeout int `json:"persist-timeout"`
ConnectionsDroppedByPersistTimeout int `json:"connections-dropped-by-persist-timeout"`
ConnectionsDroppedByFinwait2Timeout int `json:"connections-dropped-by-finwait2-timeout"`
KeepaliveTimeout int `json:"keepalive-timeout"`
KeepaliveProbes int `json:"keepalive-probes"`
ConnectionsDroppedByKeepalives int `json:"connections-dropped-by-keepalives"`
AckHeaderPredictions int `json:"ack-header-predictions"`
DataPacketHeaderPredictions int `json:"data-packet-header-predictions"`
Syncache struct {
EntriesAdded int `json:"entries-added"`
Retransmitted int `json:"retransmitted"`
Duplicates int `json:"duplicates"`
Dropped int `json:"dropped"`
Completed int `json:"completed"`
BucketOverflow int `json:"bucket-overflow"`
CacheOverflow int `json:"cache-overflow"`
Reset int `json:"reset"`
Stale int `json:"stale"`
Aborted int `json:"aborted"`
BadAck int `json:"bad-ack"`
Unreachable int `json:"unreachable"`
ZoneFailures int `json:"zone-failures"`
SentCookies int `json:"sent-cookies"`
ReceivdCookies int `json:"receivd-cookies"`
} `json:"syncache"`
Hostcache struct {
EntriesAdded int `json:"entries-added"`
BufferOverflows int `json:"buffer-overflows"`
} `json:"hostcache"`
Sack struct {
RecoveryEpisodes int `json:"recovery-episodes"`
SegmentRetransmits int `json:"segment-retransmits"`
ByteRetransmits int `json:"byte-retransmits"`
ReceivedBlocks int `json:"received-blocks"`
SentOptionBlocks int `json:"sent-option-blocks"`
ScoreboardOverflows int `json:"scoreboard-overflows"`
} `json:"sack"`
Ecn struct {
CePackets int `json:"ce-packets"`
Ect0Packets int `json:"ect0-packets"`
Ect1Packets int `json:"ect1-packets"`
Handshakes int `json:"handshakes"`
CongestionReductions int `json:"congestion-reductions"`
} `json:"ecn"`
TCPSignature struct {
ReceivedGoodSignature int `json:"received-good-signature"`
ReceivedBadSignature int `json:"received-bad-signature"`
FailedMakeSignature int `json:"failed-make-signature"`
NoSignatureExpected int `json:"no-signature-expected"`
NoSignatureProvided int `json:"no-signature-provided"`
} `json:"tcp-signature"`
Pmtud struct {
PmtudActivated int `json:"pmtud-activated"`
PmtudActivatedMinMss int `json:"pmtud-activated-min-mss"`
PmtudFailed int `json:"pmtud-failed"`
} `json:"pmtud"`
Tw struct {
TwResponds int `json:"tw_responds"`
TwRecycles int `json:"tw_recycles"`
TwResets int `json:"tw_resets"`
} `json:"tw"`
TCPConnectionCountByState struct {
Closed int `json:"CLOSED"`
Listen int `json:"LISTEN"`
SynSent int `json:"SYN_SENT"`
SynRcvd int `json:"SYN_RCVD"`
Established int `json:"ESTABLISHED"`
CloseWait int `json:"CLOSE_WAIT"`
FinWait1 int `json:"FIN_WAIT_1"`
Closing int `json:"CLOSING"`
LastAck int `json:"LAST_ACK"`
FinWait2 int `json:"FIN_WAIT_2"`
TimeWait int `json:"TIME_WAIT"`
} `json:"TCP connection count by state"`
} `json:"tcp"`
UDP struct {
ReceivedDatagrams int `json:"received-datagrams"`
DroppedIncompleteHeaders int `json:"dropped-incomplete-headers"`
DroppedBadDataLength int `json:"dropped-bad-data-length"`
DroppedBadChecksum int `json:"dropped-bad-checksum"`
DroppedNoChecksum int `json:"dropped-no-checksum"`
DroppedNoSocket int `json:"dropped-no-socket"`
DroppedBroadcastMulticast int `json:"dropped-broadcast-multicast"`
DroppedFullSocketBuffer int `json:"dropped-full-socket-buffer"`
NotForHashedPcb int `json:"not-for-hashed-pcb"`
DeliveredPackets int `json:"delivered-packets"`
OutputPackets int `json:"output-packets"`
MulticastSourceFilterMatches int `json:"multicast-source-filter-matches"`
} `json:"udp"`
IP struct {
ReceivedPackets int `json:"received-packets"`
DroppedBadChecksum int `json:"dropped-bad-checksum"`
DroppedBelowMinimumSize int `json:"dropped-below-minimum-size"`
DroppedShortPackets int `json:"dropped-short-packets"`
DroppedTooLong int `json:"dropped-too-long"`
DroppedShortHeaderLength int `json:"dropped-short-header-length"`
DroppedShortData int `json:"dropped-short-data"`
DroppedBadOptions int `json:"dropped-bad-options"`
DroppedBadVersion int `json:"dropped-bad-version"`
ReceivedFragments int `json:"received-fragments"`
DroppedFragments int `json:"dropped-fragments"`
DroppedFragmentsAfterTimeout int `json:"dropped-fragments-after-timeout"`
ReassembledPackets int `json:"reassembled-packets"`
ReceivedLocalPackets int `json:"received-local-packets"`
DroppedUnknownProtocol int `json:"dropped-unknown-protocol"`
ForwardedPackets int `json:"forwarded-packets"`
FastForwardedPackets int `json:"fast-forwarded-packets"`
PacketsCannotForward int `json:"packets-cannot-forward"`
ReceivedUnknownMulticastGroup int `json:"received-unknown-multicast-group"`
RedirectsSent int `json:"redirects-sent"`
SentPackets int `json:"sent-packets"`
SendPacketsFabricatedHeader int `json:"send-packets-fabricated-header"`
DiscardNoMbufs int `json:"discard-no-mbufs"`
DiscardNoRoute int `json:"discard-no-route"`
SentFragments int `json:"sent-fragments"`
FragmentsCreated int `json:"fragments-created"`
DiscardCannotFragment int `json:"discard-cannot-fragment"`
DiscardTunnelNoGif int `json:"discard-tunnel-no-gif"`
DiscardBadAddress int `json:"discard-bad-address"`
} `json:"ip"`
Icmp struct {
IcmpCalls int `json:"icmp-calls"`
ErrorsNotFromMessage int `json:"errors-not-from-message"`
OutputHistogram []struct {
Name string `json:"name"`
Count int `json:"count"`
} `json:"output-histogram"`
DroppedBadCode int `json:"dropped-bad-code"`
DroppedTooShort int `json:"dropped-too-short"`
DroppedBadChecksum int `json:"dropped-bad-checksum"`
DroppedBadLength int `json:"dropped-bad-length"`
DroppedMulticastEcho int `json:"dropped-multicast-echo"`
DroppedMulticastTimestamp int `json:"dropped-multicast-timestamp"`
InputHistogram []struct {
Name string `json:"name"`
Count int `json:"count"`
} `json:"input-histogram"`
SentPackets int `json:"sent-packets"`
DiscardInvalidReturnAddress int `json:"discard-invalid-return-address"`
DiscardNoRoute int `json:"discard-no-route"`
IcmpAddressResponses string `json:"icmp-address-responses"`
} `json:"icmp"`
Carp struct {
ReceivedInetPackets int `json:"received-inet-packets"`
ReceivedInet6Packets int `json:"received-inet6-packets"`
DroppedWrongTTL int `json:"dropped-wrong-ttl"`
DroppedShortHeader int `json:"dropped-short-header"`
DroppedBadChecksum int `json:"dropped-bad-checksum"`
DroppedBadVersion int `json:"dropped-bad-version"`
DroppedShortPacket int `json:"dropped-short-packet"`
DroppedBadAuthentication int `json:"dropped-bad-authentication"`
DroppedBadVhid int `json:"dropped-bad-vhid"`
DroppedBadAddressList int `json:"dropped-bad-address-list"`
SentInetPackets int `json:"sent-inet-packets"`
SentInet6Packets int `json:"sent-inet6-packets"`
SendFailedMemoryError int `json:"send-failed-memory-error"`
} `json:"carp"`
Pfsync struct {
ReceivedInetPackets int `json:"received-inet-packets"`
ReceivedInet6Packets int `json:"received-inet6-packets"`
InputHistogram []struct {
Name string `json:"name"`
Count int `json:"count"`
} `json:"input-histogram"`
DroppedBadInterface int `json:"dropped-bad-interface"`
DroppedBadTTL int `json:"dropped-bad-ttl"`
DroppedShortHeader int `json:"dropped-short-header"`
DroppedBadVersion int `json:"dropped-bad-version"`
DroppedBadAuth int `json:"dropped-bad-auth"`
DroppedBadAction int `json:"dropped-bad-action"`
DroppedShort int `json:"dropped-short"`
DroppedBadValues int `json:"dropped-bad-values"`
DroppedStaleState int `json:"dropped-stale-state"`
DroppedFailedLookup int `json:"dropped-failed-lookup"`
SentInetPackets int `json:"sent-inet-packets"`
SendInet6Packets int `json:"send-inet6-packets"`
OutputHistogram []struct {
Name string `json:"name"`
Count int `json:"count"`
} `json:"output-histogram"`
DiscardedNoMemory int `json:"discarded-no-memory"`
SendErrors int `json:"send-errors"`
} `json:"pfsync"`
Arp struct {
SentRequests int `json:"sent-requests"`
SentFailures int `json:"sent-failures"`
SentReplies int `json:"sent-replies"`
ReceivedRequests int `json:"received-requests"`
ReceivedReplies int `json:"received-replies"`
ReceivedPackets int `json:"received-packets"`
DroppedNoEntry int `json:"dropped-no-entry"`
EntriesTimeout int `json:"entries-timeout"`
DroppedDuplicateAddress int `json:"dropped-duplicate-address"`
} `json:"arp"`
} `json:"statistics"`
}
type ProtocolStatistics struct {
TCPConnectionCountByState map[string]int
}
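// FetchProtocolStatistics fetches the system protocol statistics
// from the OPNsense API and returns them as ProtocolStatistics.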
func (c *Client) FetchProtocolStatistics() (ProtocolStatistics, *APICallError) {
var (
resp protocolStatisticsResponse
data ProtocolStatistics
)
url, ok := c.endpoints["protocolStatistics"]
if !ok {
return data, &APICallError{
Endpoint: "protocolStatistics",
StatusCode: 0,
Message: "endpoint not found in client endpoints",
}
}
if err := c.do("GET", url, nil, &resp); err != nil {
return data, err
}
// Map the decoded TCP connection counts by state into the exported struct.
s := resp.Statistics.TCP.TCPConnectionCountByState
data.TCPConnectionCountByState = map[string]int{
"CLOSED": s.Closed,
"LISTEN": s.Listen,
"SYN_SENT": s.SynSent,
"SYN_RCVD": s.SynRcvd,
"ESTABLISHED": s.Established,
"CLOSE_WAIT": s.CloseWait,
"FIN_WAIT_1": s.FinWait1,
"CLOSING": s.Closing,
"LAST_ACK": s.LastAck,
"FIN_WAIT_2": s.FinWait2,
"TIME_WAIT": s.TimeWait,
}
return data, nil
}

72
opnsense/services.go Normal file
View file

@@ -0,0 +1,72 @@
package opnsense
type servicesSearchResponse struct {
Total int `json:"total"`
RowCount int `json:"rowCount"`
Current int `json:"current"`
Rows []struct {
ID string `json:"id"`
Locked int `json:"locked"`
Running int `json:"running"`
Description string `json:"description"`
Name string `json:"name"`
} `json:"rows"`
}
type ServiceStatus int
const (
ServiceStatusStopped ServiceStatus = iota
ServiceStatusRunning
ServiceStatusUnknown
)
type Service struct {
Status ServiceStatus
Description string
Name string
}
type Services struct {
TotalRunning int
TotalStopped int
Services []Service
}
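// FetchServices fetches the configured services from the OPNsense API
// and returns them together with the running/stopped totals.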
func (c *Client) FetchServices() (Services, *APICallError) {
var resp servicesSearchResponse
var services Services
url, ok := c.endpoints["services"]
if !ok {
return services, &APICallError{
Endpoint: "services",
Message: "endpoint not found",
StatusCode: 0,
}
}
err := c.do("GET", url, nil, &resp)
if err != nil {
return services, err
}
for _, service := range resp.Rows {
switch service.Running {
case 0:
services.TotalStopped++
case 1:
services.TotalRunning++
}
s := Service{
Status: ServiceStatus(service.Running),
Description: service.Description,
Name: service.Name,
}
services.Services = append(services.Services, s)
}
return services, nil
}

136
opnsense/system.go Normal file
View file

@@ -0,0 +1,136 @@
package opnsense
type systemInfoResponse struct {
System string `json:"system"`
Plugins []string `json:"plugins"`
Data struct {
Interfaces []struct {
Inpkts string `json:"inpkts"`
Outpkts string `json:"outpkts"`
Inbytes string `json:"inbytes"`
Outbytes string `json:"outbytes"`
InbytesFrmt string `json:"inbytes_frmt"`
OutbytesFrmt string `json:"outbytes_frmt"`
Inerrs string `json:"inerrs"`
Outerrs string `json:"outerrs"`
Collisions string `json:"collisions"`
Descr string `json:"descr"`
Name string `json:"name"`
Status string `json:"status"`
Ipaddr string `json:"ipaddr"`
Media string `json:"media"`
} `json:"interfaces"`
System struct {
Versions []string `json:"versions"`
CPU struct {
Used string `json:"used"`
User string `json:"user"`
Nice string `json:"nice"`
Sys string `json:"sys"`
Intr string `json:"intr"`
Idle string `json:"idle"`
Model string `json:"model"`
Cpus string `json:"cpus"`
Cores string `json:"cores"`
MaxFreq string `json:"max.freq"`
CurFreq string `json:"cur.freq"`
FreqTranslate string `json:"freq_translate"`
Load []string `json:"load"`
} `json:"cpu"`
DateFrmt string `json:"date_frmt"`
DateTime string `json:"date_time"`
Uptime string `json:"uptime"`
Config struct {
LastChange string `json:"last_change"`
LastChangeFrmt string `json:"last_change_frmt"`
} `json:"config"`
Kernel struct {
Pf struct {
Maxstates string `json:"maxstates"`
States string `json:"states"`
} `json:"pf"`
Mbuf struct {
Total string `json:"total"`
Max string `json:"max"`
} `json:"mbuf"`
Memory struct {
Total string `json:"total"`
Used string `json:"used"`
Arc string `json:"arc"`
ArcTxt string `json:"arc_txt"`
} `json:"memory"`
} `json:"kernel"`
Disk struct {
Swap []struct {
Device string `json:"device"`
Total string `json:"total"`
Used string `json:"used"`
} `json:"swap"`
Devices []struct {
Device string `json:"device"`
Type string `json:"type"`
Size string `json:"size"`
Used string `json:"used"`
Available string `json:"available"`
Capacity string `json:"capacity"`
Mountpoint string `json:"mountpoint"`
} `json:"devices"`
} `json:"disk"`
Firmware string `json:"firmware"`
} `json:"system"`
Temperature []struct {
Device string `json:"device"`
DeviceSeq string `json:"device_seq"`
Temperature string `json:"temperature"`
Type string `json:"type"`
TypeTranslated string `json:"type_translated"`
} `json:"temperature"`
} `json:"data"`
}
type Temperature struct {
Device string
DeviceSeq string
Type string
TemperatureCelsuis int
TemperatureFahrenheit float32
}
type SystemInfo struct {
Temperature []Temperature
}
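// FetchSystemInfo fetches the system information from the OPNsense API.
// Currently only the temperature sensors are extracted.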
func (c *Client) FetchSystemInfo() (SystemInfo, *APICallError) {
var resp systemInfoResponse
var data SystemInfo
url, ok := c.endpoints["systemInfo"]
if !ok {
return data, &APICallError{
Endpoint: "system_info",
Message: "endpoint not found",
StatusCode: 0,
}
}
if err := c.do("GET", url, nil, &resp); err != nil {
return data, err
}
for _, v := range resp.Data.Temperature {
celsius, err := parseStringToInt(v.Temperature, url)
if err != nil {
return data, err
}
data.Temperature = append(data.Temperature, Temperature{
Device: v.Device,
DeviceSeq: v.DeviceSeq,
Type: v.Type,
TemperatureCelsuis: celsius,
TemperatureFahrenheit: (float32(celsius) * 1.8) + 32,
})
}
return data, nil
}

244
opnsense/unbound_dns.go Normal file
View file

@@ -0,0 +1,244 @@
package opnsense
import (
"fmt"
"strconv"
)
type unboundDNSStatusResponse struct {
Status string `json:"status"`
Data struct {
Total struct {
Num struct {
Queries string `json:"queries"`
QueriesIPRatelimited string `json:"queries_ip_ratelimited"`
QueriesCookieValid string `json:"queries_cookie_valid"`
QueriesCookieClient string `json:"queries_cookie_client"`
QueriesCookieInvalid string `json:"queries_cookie_invalid"`
Cachehits string `json:"cachehits"`
Cachemiss string `json:"cachemiss"`
Prefetch string `json:"prefetch"`
QueriesTimedOut string `json:"queries_timed_out"`
Expired string `json:"expired"`
Recursivereplies string `json:"recursivereplies"`
Dnscrypt struct {
Crypted string `json:"crypted"`
Cert string `json:"cert"`
Cleartext string `json:"cleartext"`
Malformed string `json:"malformed"`
} `json:"dnscrypt"`
} `json:"num"`
Query struct {
QueueTimeUs struct {
Max string `json:"max"`
} `json:"queue_time_us"`
} `json:"query"`
Requestlist struct {
Avg string `json:"avg"`
Max string `json:"max"`
Overwritten string `json:"overwritten"`
Exceeded string `json:"exceeded"`
Current struct {
All string `json:"all"`
User string `json:"user"`
} `json:"current"`
} `json:"requestlist"`
Recursion struct {
Time struct {
Avg string `json:"avg"`
Median string `json:"median"`
} `json:"time"`
} `json:"recursion"`
Tcpusage string `json:"tcpusage"`
} `json:"total"`
Time struct {
Now string `json:"now"`
Up string `json:"up"`
Elapsed string `json:"elapsed"`
} `json:"time"`
Mem struct {
Cache struct {
Rrset string `json:"rrset"`
Message string `json:"message"`
DnscryptSharedSecret string `json:"dnscrypt_shared_secret"`
DnscryptNonce string `json:"dnscrypt_nonce"`
} `json:"cache"`
Mod struct {
Iterator string `json:"iterator"`
Validator string `json:"validator"`
Respip string `json:"respip"`
Dynlibmod string `json:"dynlibmod"`
} `json:"mod"`
Streamwait string `json:"streamwait"`
HTTP struct {
QueryBuffer string `json:"query_buffer"`
ResponseBuffer string `json:"response_buffer"`
} `json:"http"`
} `json:"mem"`
Num struct {
Query struct {
Type struct {
A string `json:"A"`
Soa string `json:"SOA"`
Ptr string `json:"PTR"`
Mx string `json:"MX"`
Txt string `json:"TXT"`
Aaaa string `json:"AAAA"`
Srv string `json:"SRV"`
Svcb string `json:"SVCB"`
HTTPS string `json:"HTTPS"`
} `json:"type"`
Class struct {
In string `json:"IN"`
} `json:"class"`
Opcode struct {
Query string `json:"QUERY"`
} `json:"opcode"`
TCP string `json:"tcp"`
Tcpout string `json:"tcpout"`
Udpout string `json:"udpout"`
TLS struct {
Value string `json:"__value__"`
Resume string `json:"resume"`
} `json:"tls"`
Ipv6 string `json:"ipv6"`
HTTPS string `json:"https"`
Flags struct {
Qr string `json:"QR"`
Aa string `json:"AA"`
Tc string `json:"TC"`
Rd string `json:"RD"`
Ra string `json:"RA"`
Z string `json:"Z"`
Ad string `json:"AD"`
Cd string `json:"CD"`
} `json:"flags"`
Edns struct {
Present string `json:"present"`
Do string `json:"DO"`
} `json:"edns"`
Ratelimited string `json:"ratelimited"`
Aggressive struct {
Noerror string `json:"NOERROR"`
Nxdomain string `json:"NXDOMAIN"`
} `json:"aggressive"`
Dnscrypt struct {
SharedSecret struct {
Cachemiss string `json:"cachemiss"`
} `json:"shared_secret"`
Replay string `json:"replay"`
} `json:"dnscrypt"`
Authzone struct {
Up string `json:"up"`
Down string `json:"down"`
} `json:"authzone"`
} `json:"query"`
Answer struct {
Rcode struct {
Noerror string `json:"NOERROR"`
Formerr string `json:"FORMERR"`
Servfail string `json:"SERVFAIL"`
Nxdomain string `json:"NXDOMAIN"`
Notimpl string `json:"NOTIMPL"`
Refused string `json:"REFUSED"`
Nodata string `json:"nodata"`
} `json:"rcode"`
Secure string `json:"secure"`
Bogus string `json:"bogus"`
} `json:"answer"`
Rrset struct {
Bogus string `json:"bogus"`
} `json:"rrset"`
} `json:"num"`
Unwanted struct {
Queries string `json:"queries"`
Replies string `json:"replies"`
} `json:"unwanted"`
Msg struct {
Cache struct {
Count string `json:"count"`
MaxCollisions string `json:"max_collisions"`
} `json:"cache"`
} `json:"msg"`
Rrset struct {
Cache struct {
Count string `json:"count"`
MaxCollisions string `json:"max_collisions"`
} `json:"cache"`
} `json:"rrset"`
Infra struct {
Cache struct {
Count string `json:"count"`
} `json:"cache"`
} `json:"infra"`
Key struct {
Cache struct {
Count string `json:"count"`
} `json:"cache"`
} `json:"key"`
DnscryptSharedSecret struct {
Cache struct {
Count string `json:"count"`
} `json:"cache"`
} `json:"dnscrypt_shared_secret"`
DnscryptNonce struct {
Cache struct {
Count string `json:"count"`
} `json:"cache"`
} `json:"dnscrypt_nonce"`
} `json:"data"`
}
type UnboundDNSOverview struct {
Total int
BlocklistSize int
Passed int
UptimeSeconds float64
AnswerRcodes map[string]int
AnswerRcodesTotal int
AnnswerBogusTotal int
AnswerSecureTotal int
QueryTypes map[string]int
}
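// FetchUnboundOverview fetches the Unbound DNS statistics from the OPNsense API
// and returns the uptime in seconds together with the bogus/secure answer totals.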
func (c *Client) FetchUnboundOverview() (UnboundDNSOverview, *APICallError) {
var (
response unboundDNSStatusResponse
data UnboundDNSOverview
err error
errConvertion *APICallError
)
url, ok := c.endpoints["unboundDNSStatus"]
if !ok {
return data, &APICallError{
Endpoint: "unboundDNSStatus",
Message: "endpoint not found in client endpoints",
StatusCode: 0,
}
}
if err := c.do("GET", url, nil, &response); err != nil {
return data, err
}
data.QueryTypes = make(map[string]int)
data.AnswerRcodes = make(map[string]int)
data.UptimeSeconds, err = strconv.ParseFloat(response.Data.Time.Up, 64)
if err != nil {
return data, &APICallError{
Endpoint: string(url),
Message: fmt.Sprintf("unable to parse uptime %s", err),
StatusCode: 0,
}
}
data.AnnswerBogusTotal, errConvertion = parseStringToInt(response.Data.Num.Answer.Bogus, url)
if errConvertion != nil {
return data, errConvertion
}
data.AnswerSecureTotal, errConvertion = parseStringToInt(response.Data.Num.Answer.Secure, url)
if errConvertion != nil {
return data, errConvertion
}
return data, nil
}

51
opnsense/utils.go Normal file
View file

@@ -0,0 +1,51 @@
package opnsense
import (
"fmt"
"regexp"
"strconv"
"strings"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
)
// parseStringToInt parses a string value to an int value.
// The endpoint identifies the EndpointPath that the caller used,
// so we can propagate it in the *APICallError.
func parseStringToInt(value string, endpoint EndpointPath) (int, *APICallError) {
intValue, err := strconv.Atoi(value)
if err != nil {
return 0, &APICallError{
Endpoint: string(endpoint),
Message: fmt.Sprintf("error parsing %s to int: %s", value, err.Error()),
StatusCode: 0,
}
}
return intValue, nil
}
// parseStringToFloatWithReplace parses a string value to a float64 value.
// The replace pattern is used to remove any characters that are not part of the float64 value.
// The regex is first used to check if the value matches the regex format.
func parseStringToFloatWithReplace(value string, regex *regexp.Regexp, replacePattern string, valueTypeName string, logger log.Logger) float64 {
if regex.MatchString(value) {
cleanValue := strings.Replace(value, replacePattern, "", -1)
parsedValue, err := strconv.ParseFloat(cleanValue, 64)
if err != nil {
level.Warn(logger).
Log(
"msg", fmt.Sprintf("parsing %s: '%s' to float64 failed", valueTypeName, value),
"err", err,
)
return -1.0
}
return parsedValue
}
level.Warn(logger).
Log(
"msg", fmt.Sprintf("parsing %s: '%s' to float64 failed. Pattern matching failed.", valueTypeName, value),
)
return -1.0
}

70
opnsense/utils_test.go Normal file
View file

@@ -0,0 +1,70 @@
package opnsense
import (
"regexp"
"testing"
"github.com/go-kit/log"
)
func TestParsePercentage(t *testing.T) {
logger := log.NewNopLogger()
testRegex := regexp.MustCompile(`\d\.\d %`)
tests := []struct {
name string
value string
regex *regexp.Regexp
replacePattern string
valueTypeName string
gatewayName string
expected float64
}{
{
name: "Valid percentage with space",
value: "50.5 %",
regex: testRegex,
replacePattern: " %",
valueTypeName: "loss",
gatewayName: "Gateway1",
expected: 50.5,
},
{
name: "Valid percentage with space",
value: "5.5 %",
regex: testRegex,
replacePattern: " %",
valueTypeName: "loss",
gatewayName: "Gateway1",
expected: 5.5,
},
{
name: "Invalid percentage format",
value: "invalid %",
regex: testRegex,
replacePattern: " %",
valueTypeName: "loss",
gatewayName: "Gateway1",
expected: -1.0,
},
{
name: "Invalid regex match (no space)",
value: "50.5%",
regex: testRegex,
replacePattern: " %",
valueTypeName: "loss",
gatewayName: "Gateway1",
expected: -1,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := parseStringToFloatWithReplace(tc.value, tc.regex, tc.replacePattern, tc.valueTypeName, logger)
if result != tc.expected {
t.Errorf("parseStringToFloatWithReplace(%q, %v, %q, %q, logger) = %v; want %v",
tc.value, tc.regex, tc.replacePattern, tc.valueTypeName, result, tc.expected)
}
})
}
}

14
vendor/github.com/alecthomas/kingpin/v2/.travis.yml generated vendored Normal file
View file

@@ -0,0 +1,14 @@
sudo: false
language: go
install: go get -t -v ./...
go:
- 1.2.x
- 1.3.x
- 1.4.x
- 1.5.x
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x

19
vendor/github.com/alecthomas/kingpin/v2/COPYING generated vendored Normal file
View file

@@ -0,0 +1,19 @@
Copyright (C) 2014 Alec Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

709
vendor/github.com/alecthomas/kingpin/v2/README.md generated vendored Normal file
View file

@@ -0,0 +1,709 @@
# CONTRIBUTIONS ONLY
**What does this mean?** I do not have time to fix issues myself. The only way fixes or new features will be added is by people submitting PRs. If you are interested in taking over maintenance and have a history of contributions to Kingpin, please let me know.
**Current status.** Kingpin is largely feature stable. There hasn't been a need to add new features in a while, but there are some bugs that should be fixed.
**Why?** I no longer use Kingpin personally (I now use [kong](https://github.com/alecthomas/kong)). Rather than leave the project in a limbo of people filing issues and wondering why they're not being worked on, I believe this notice will more clearly set expectations.
# Kingpin - A Go (golang) command line and flag parser
[![](https://godoc.org/github.com/alecthomas/kingpin?status.svg)](http://godoc.org/github.com/alecthomas/kingpin) [![CI](https://github.com/alecthomas/kingpin/actions/workflows/ci.yml/badge.svg)](https://github.com/alecthomas/kingpin/actions/workflows/ci.yml)
<!-- MarkdownTOC -->
- [Overview](#overview)
- [Features](#features)
- [User-visible changes between v1 and v2](#user-visible-changes-between-v1-and-v2)
- [Flags can be used at any point after their definition.](#flags-can-be-used-at-any-point-after-their-definition)
- [Short flags can be combined with their parameters](#short-flags-can-be-combined-with-their-parameters)
- [API changes between v1 and v2](#api-changes-between-v1-and-v2)
- [Versions](#versions)
- [V2 is the current stable version](#v2-is-the-current-stable-version)
- [V1 is the OLD stable version](#v1-is-the-old-stable-version)
- [Change History](#change-history)
- [Examples](#examples)
- [Simple Example](#simple-example)
- [Complex Example](#complex-example)
- [Reference Documentation](#reference-documentation)
- [Displaying errors and usage information](#displaying-errors-and-usage-information)
- [Sub-commands](#sub-commands)
- [Custom Parsers](#custom-parsers)
- [Repeatable flags](#repeatable-flags)
- [Boolean Values](#boolean-values)
- [Default Values](#default-values)
- [Place-holders in Help](#place-holders-in-help)
- [Consuming all remaining arguments](#consuming-all-remaining-arguments)
- [Bash/ZSH Shell Completion](#bashzsh-shell-completion)
- [Supporting -h for help](#supporting--h-for-help)
- [Custom help](#custom-help)
<!-- /MarkdownTOC -->
## Overview
Kingpin is a [fluent-style](http://en.wikipedia.org/wiki/Fluent_interface),
type-safe command-line parser. It supports flags, nested commands, and
positional arguments.
Install it with:
$ go get github.com/alecthomas/kingpin/v2
It looks like this:
```go
var (
verbose = kingpin.Flag("verbose", "Verbose mode.").Short('v').Bool()
name = kingpin.Arg("name", "Name of user.").Required().String()
)
func main() {
kingpin.Parse()
fmt.Printf("%v, %s\n", *verbose, *name)
}
```
More [examples](https://github.com/alecthomas/kingpin/tree/master/_examples) are available.
Second to parsing, providing the user with useful help is probably the most
important thing a command-line parser does. Kingpin tries to provide detailed
contextual help if `--help` is encountered at any point in the command line
(excluding after `--`).
## Features
- Help output that isn't as ugly as sin.
- Fully [customisable help](#custom-help), via Go templates.
- Parsed, type-safe flags (`kingpin.Flag("f", "help").Int()`)
- Parsed, type-safe positional arguments (`kingpin.Arg("a", "help").Int()`).
- Parsed, type-safe, arbitrarily deep commands (`kingpin.Command("c", "help")`).
- Support for required flags and required positional arguments (`kingpin.Flag("f", "").Required().Int()`).
- Support for arbitrarily nested default commands (`command.Default()`).
- Callbacks per command, flag and argument (`kingpin.Command("c", "").Action(myAction)`).
- POSIX-style short flag combining (`-a -b` -> `-ab`).
- Short-flag+parameter combining (`-a parm` -> `-aparm`).
- Read command-line from files (`@<file>`).
- Automatically generate man pages (`--help-man`).
## User-visible changes between v1 and v2
### Flags can be used at any point after their definition.
Flags can be specified at any point after their definition, not just
*immediately after their associated command*. From the chat example below, the
following used to be required:
```
$ chat --server=chat.server.com:8080 post --image=~/Downloads/owls.jpg pics
```
But the following will now work:
```
$ chat post --server=chat.server.com:8080 --image=~/Downloads/owls.jpg pics
```
### Short flags can be combined with their parameters
Previously, if a short flag was used, any argument to that flag would have to
be separated by a space. That is no longer the case.
## API changes between v1 and v2
- `ParseWithFileExpansion()` is gone. The new parser directly supports expanding `@<file>`.
- Added `FatalUsage()` and `FatalUsageContext()` for displaying an error + usage and terminating.
- `Dispatch()` renamed to `Action()`.
- Added `ParseContext()` for parsing a command line into its intermediate context form without executing.
- Added `Terminate()` function to override the termination function.
- Added `UsageForContextWithTemplate()` for printing usage via a custom template.
- Added `UsageTemplate()` for overriding the default template to use. Two templates are included:
1. `DefaultUsageTemplate` - default template.
2. `CompactUsageTemplate` - compact command template for larger applications.
## Versions
The current stable version is [github.com/alecthomas/kingpin/v2](https://github.com/alecthomas/kingpin/v2). The previous version, [gopkg.in/alecthomas/kingpin.v1](https://gopkg.in/alecthomas/kingpin.v1), is deprecated and in maintenance mode.
### [V2](https://github.com/alecthomas/kingpin/v2) is the current stable version
Installation:
```sh
$ go get github.com/alecthomas/kingpin/v2
```
### [V1](https://gopkg.in/alecthomas/kingpin.v1) is the OLD stable version
Installation:
```sh
$ go get gopkg.in/alecthomas/kingpin.v1
```
## Change History
- *2015-09-19* -- Stable v2.1.0 release.
- Added `command.Default()` to specify a default command to use if no other
command matches. This allows for convenient user shortcuts.
- Exposed `HelpFlag` and `VersionFlag` for further customisation.
- `Action()` and `PreAction()` added and both now support an arbitrary
number of callbacks.
- `kingpin.SeparateOptionalFlagsUsageTemplate`.
- `--help-long` and `--help-man` (hidden by default) flags.
- Flags are "interspersed" by default, but can be disabled with `app.Interspersed(false)`.
- Added flags for all simple builtin types (int8, uint16, etc.) and slice variants.
- Use `app.Writer(os.Writer)` to specify the default writer for all output functions.
- Dropped `os.Writer` prefix from all printf-like functions.
- *2015-05-22* -- Stable v2.0.0 release.
- Initial stable release of v2.0.0.
- Fully supports interspersed flags, commands and arguments.
- Flags can be present at any point after their logical definition.
- Application.Parse() terminates if commands are present and a command is not parsed.
- Dispatch() -> Action().
- Actions are dispatched after all values are populated.
- Override termination function (defaults to os.Exit).
- Override output stream (defaults to os.Stderr).
- Templatised usage help, with default and compact templates.
- Make error/usage functions more consistent.
- Support argument expansion from files by default (with @<file>).
- Fully public data model is available via .Model().
- Parser has been completely refactored.
- Parsing and execution has been split into distinct stages.
- Use `go generate` to generate repeated flags.
- Support combined short-flag+argument: -fARG.
- *2015-01-23* -- Stable v1.3.4 release.
- Support "--" for separating flags from positional arguments.
- Support loading flags from files (ParseWithFileExpansion()). Use @FILE as an argument.
- Add post-app and post-cmd validation hooks. This allows arbitrary validation to be added.
- A bunch of improvements to help usage and formatting.
- Support arbitrarily nested sub-commands.
- *2014-07-08* -- Stable v1.2.0 release.
- Pass any value through to `Strings()` when final argument.
Allows for values that look like flags to be processed.
- Allow `--help` to be used with commands.
- Support `Hidden()` flags.
- Parser for [units.Base2Bytes](https://github.com/alecthomas/units)
type. Allows for flags like `--ram=512MB` or `--ram=1GB`.
- Add an `Enum()` value, allowing only one of a set of values
to be selected. eg. `Flag(...).Enum("debug", "info", "warning")`.
- *2014-06-27* -- Stable v1.1.0 release.
- Bug fixes.
- Always return an error (rather than panicing) when misconfigured.
- `OpenFile(flag, perm)` value type added, for finer control over opening files.
- Significantly improved usage formatting.
- *2014-06-19* -- Stable v1.0.0 release.
- Support [cumulative positional](#consuming-all-remaining-arguments) arguments.
- Return error rather than panic when there are fatal errors not caught by
the type system. eg. when a default value is invalid.
- Use gopkg.in.
- *2014-06-10* -- Place-holder streamlining.
- Renamed `MetaVar` to `PlaceHolder`.
- Removed `MetaVarFromDefault`. Kingpin now uses [heuristics](#place-holders-in-help)
to determine what to display.
## Examples
### Simple Example
Kingpin can be used for simple flag+arg applications like so:
```
$ ping --help
usage: ping [<flags>] <ip> [<count>]
Flags:
--debug Enable debug mode.
--help Show help.
-t, --timeout=5s Timeout waiting for ping.
Args:
<ip> IP address to ping.
[<count>] Number of packets to send
$ ping 1.2.3.4 5
Would ping: 1.2.3.4 with timeout 5s and count 5
```
From the following source:
```go
package main
import (
"fmt"
"github.com/alecthomas/kingpin/v2"
)
var (
debug = kingpin.Flag("debug", "Enable debug mode.").Bool()
timeout = kingpin.Flag("timeout", "Timeout waiting for ping.").Default("5s").Envar("PING_TIMEOUT").Short('t').Duration()
ip = kingpin.Arg("ip", "IP address to ping.").Required().IP()
count = kingpin.Arg("count", "Number of packets to send").Int()
)
func main() {
kingpin.Version("0.0.1")
kingpin.Parse()
fmt.Printf("Would ping: %s with timeout %s and count %d\n", *ip, *timeout, *count)
}
```
#### Reading arguments from a file
Kingpin supports reading arguments from a file.
Create a file with the corresponding arguments:
```
echo -t=5\n > args
```
And now supply it:
```
$ ping @args
```
### Complex Example
Kingpin can also produce complex command-line applications with global flags,
subcommands, and per-subcommand flags, like this:
```
$ chat --help
usage: chat [<flags>] <command> [<flags>] [<args> ...]
A command-line chat application.
Flags:
--help Show help.
--debug Enable debug mode.
--server=127.0.0.1 Server address.
Commands:
help [<command>]
Show help for a command.
register <nick> <name>
Register a new user.
post [<flags>] <channel> [<text>]
Post a message to a channel.
$ chat help post
usage: chat [<flags>] post [<flags>] <channel> [<text>]
Post a message to a channel.
Flags:
--image=IMAGE Image to post.
Args:
<channel> Channel to post to.
[<text>] Text to post.
$ chat post --image=~/Downloads/owls.jpg pics
...
```
From this code:
```go
package main
import (
"os"
"strings"
"github.com/alecthomas/kingpin/v2"
)
var (
app = kingpin.New("chat", "A command-line chat application.")
debug = app.Flag("debug", "Enable debug mode.").Bool()
serverIP = app.Flag("server", "Server address.").Default("127.0.0.1").IP()
register = app.Command("register", "Register a new user.")
registerNick = register.Arg("nick", "Nickname for user.").Required().String()
registerName = register.Arg("name", "Name of user.").Required().String()
post = app.Command("post", "Post a message to a channel.")
postImage = post.Flag("image", "Image to post.").File()
postChannel = post.Arg("channel", "Channel to post to.").Required().String()
postText = post.Arg("text", "Text to post.").Strings()
)
func main() {
switch kingpin.MustParse(app.Parse(os.Args[1:])) {
// Register user
case register.FullCommand():
println(*registerNick)
// Post message
case post.FullCommand():
if *postImage != nil {
}
text := strings.Join(*postText, " ")
println("Post:", text)
}
}
```
## Reference Documentation
### Displaying errors and usage information
Kingpin exports a set of functions to provide consistent errors and usage
information to the user.
Error messages look something like this:
<app>: error: <message>
The functions on `Application` are:
Function | Purpose
---------|--------------
`Errorf(format, args)` | Display a printf formatted error to the user.
`Fatalf(format, args)` | As with Errorf, but also call the termination handler.
`FatalUsage(format, args)` | As with Fatalf, but also print contextual usage information.
`FatalUsageContext(context, format, args)` | As with Fatalf, but also print contextual usage information from a `ParseContext`.
`FatalIfError(err, format, args)` | Conditionally print an error prefixed with format+args, then call the termination handler
There are equivalent global functions in the kingpin namespace for the default
`kingpin.CommandLine` instance.
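For illustration, a minimal sketch using the global `FatalIfError` helper on the default `kingpin.CommandLine` instance (the `--config` flag and file name are only placeholders):
```go
package main

import (
	"os"

	"github.com/alecthomas/kingpin/v2"
)

var config = kingpin.Flag("config", "Path to the config file.").Default("config.yaml").String()

func main() {
	kingpin.Parse()
	f, err := os.Open(*config)
	// Prints an error prefixed with the formatted message, then calls the termination handler.
	kingpin.FatalIfError(err, "opening %s", *config)
	defer f.Close()
}
```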
### Sub-commands
Kingpin supports nested sub-commands, with separate flag and positional
arguments per sub-command. Note that positional arguments may only occur after
sub-commands.
For example:
```go
var (
deleteCommand = kingpin.Command("delete", "Delete an object.")
deleteUserCommand = deleteCommand.Command("user", "Delete a user.")
deleteUserUIDFlag = deleteUserCommand.Flag("uid", "Delete user by UID rather than username.")
deleteUserUsername = deleteUserCommand.Arg("username", "Username to delete.")
deletePostCommand = deleteCommand.Command("post", "Delete a post.")
)
func main() {
switch kingpin.Parse() {
case deleteUserCommand.FullCommand():
case deletePostCommand.FullCommand():
}
}
```
### Custom Parsers
Kingpin supports both flag and positional argument parsers for converting to
Go types. For example, some included parsers are `Int()`, `Float()`,
`Duration()` and `ExistingFile()` (see [parsers.go](./parsers.go) for a complete list of included parsers).
Parsers conform to Go's [`flag.Value`](http://godoc.org/flag#Value)
interface, so any existing implementations will work.
For example, a parser for accumulating HTTP header values might look like this:
```go
type HTTPHeaderValue http.Header
func (h *HTTPHeaderValue) Set(value string) error {
parts := strings.SplitN(value, ":", 2)
if len(parts) != 2 {
return fmt.Errorf("expected HEADER:VALUE got '%s'", value)
}
(*http.Header)(h).Add(parts[0], parts[1])
return nil
}
func (h *HTTPHeaderValue) String() string {
return ""
}
```
As a convenience, I would recommend something like this:
```go
func HTTPHeader(s Settings) (target *http.Header) {
target = &http.Header{}
s.SetValue((*HTTPHeaderValue)(target))
return
}
```
You would use it like so:
```go
headers = HTTPHeader(kingpin.Flag("header", "Add a HTTP header to the request.").Short('H'))
```
### Repeatable flags
Depending on the `Value` they hold, some flags may be repeated. The
`IsCumulative() bool` function on `Value` tells if it's safe to call `Set()`
multiple times or if an error should be raised if several values are passed.
The built-in `Value`s returning slices and maps, as well as `Counter` are
examples of `Value`s that make a flag repeatable.
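For example, the built-in `Strings()` value is cumulative, so a flag backed by it may be repeated (a short sketch; the `--header` flag is just an illustration):
```go
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

// Strings() is cumulative: every -H/--header occurrence appends to the slice.
var headers = kingpin.Flag("header", "HTTP header to send (repeatable).").Short('H').Strings()

func main() {
	kingpin.Parse()
	fmt.Println(*headers)
}
```
Running it as `./tool -H 'Accept: text/html' -H 'X-Trace: 1'` collects both values in the slice.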
### Boolean values
Boolean values are uniquely managed by Kingpin. Each boolean flag will have a negative complement:
`--<name>` and `--no-<name>`.
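A small sketch of the generated negative flag (the `--color` flag is illustrative):
```go
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

// Because the flag is boolean, kingpin also accepts --no-color to turn it off.
var color = kingpin.Flag("color", "Colorize output.").Default("true").Bool()

func main() {
	kingpin.Parse()
	fmt.Println("color:", *color)
}
```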
### Default Values
The default value is the zero value for a type. This can be overridden with
the `Default(value...)` function on flags and arguments. This function accepts
one or several strings, which are parsed by the value itself, so they *must*
be compliant with the format expected.
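For instance, a sketch of overriding the zero value of a `Duration()` flag (the flag name is illustrative):
```go
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

// The default is passed as a string and parsed by the Duration value itself,
// so it must be compliant with the format that value expects.
var timeout = kingpin.Flag("timeout", "Connection timeout.").Default("10s").Duration()

func main() {
	kingpin.Parse()
	fmt.Println("timeout:", *timeout)
}
```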
### Place-holders in Help
The place-holder value for a flag is the value used in the help to describe
the value of a non-boolean flag.
The value provided to PlaceHolder() is used if provided, then the value
provided by Default() if provided, then finally the capitalised flag name is
used.
Here are some examples of flags with various permutations:
--name=NAME // Flag(...).String()
--name="Harry" // Flag(...).Default("Harry").String()
--name=FULL-NAME // Flag(...).PlaceHolder("FULL-NAME").Default("Harry").String()
### Consuming all remaining arguments
A common command-line idiom is to use all remaining arguments for some
purpose. eg. The following command accepts an arbitrary number of
IP addresses as positional arguments:
./cmd ping 10.1.1.1 192.168.1.1
Such arguments are similar to [repeatable flags](#repeatable-flags), but for
arguments. Therefore they use the same `IsCumulative() bool` function on the
underlying `Value`, so the built-in `Value`s for which the `Set()` function
can be called several times will consume multiple arguments.
To implement the above example with a custom `Value`, we might do something
like this:
```go
type ipList []net.IP
func (i *ipList) Set(value string) error {
if ip := net.ParseIP(value); ip == nil {
return fmt.Errorf("'%s' is not an IP address", value)
} else {
*i = append(*i, ip)
return nil
}
}
func (i *ipList) String() string {
return ""
}
func (i *ipList) IsCumulative() bool {
return true
}
func IPList(s Settings) (target *[]net.IP) {
target = new([]net.IP)
s.SetValue((*ipList)(target))
return
}
```
And use it like so:
```go
ips := IPList(kingpin.Arg("ips", "IP addresses to ping."))
```
### Bash/ZSH Shell Completion
By default, all flags and commands/subcommands generate completions
internally.
Out of the box, CLI tools using kingpin should be able to take advantage
of completion hinting for flags and commands. By specifying
`--completion-bash` as the first argument, your CLI tool will show
possible subcommands. By ending your argv with `--`, hints for flags
will be shown.
To allow your end users to take advantage you must package a
`/etc/bash_completion.d` script with your distribution (or the equivalent
for your target platform/shell). An alternative is to instruct your end
user to source a script from their `bash_profile` (or equivalent).
Fortunately Kingpin makes it easy to generate or source a script for use
with end users shells. `./yourtool --completion-script-bash` and
`./yourtool --completion-script-zsh` will generate these scripts for you.
**Installation by Package**
For the best user experience, you should bundle your pre-created
completion script with your CLI tool and install it inside
`/etc/bash_completion.d` (or equivalent). A good suggestion is to add
this as an automated step to your build pipeline, in case the implementation
is improved or bugs are fixed.
**Installation by `bash_profile`**
Alternatively, instruct your users to add an additional statement to
their `bash_profile` (or equivalent):
```
eval "$(your-cli-tool --completion-script-bash)"
```
Or for ZSH
```
eval "$(your-cli-tool --completion-script-zsh)"
```
#### Additional API
To provide more flexibility, a completion option API has been
exposed for flags to allow user defined completion options, to extend
completions further than just EnumVar/Enum.
**Provide Static Options**
When using an `Enum` or `EnumVar`, users are limited to only the options
given. Maybe we wish to hint possible options to the user, but also
allow them to provide their own custom option. `HintOptions` gives
this functionality to flags.
```
app := kingpin.New("completion", "My application with bash completion.")
app.Flag("port", "Provide a port to connect to").
Required().
HintOptions("80", "443", "8080").
IntVar(&c.port)
```
**Provide Dynamic Options**
Consider the case that you needed to read a local database or a file to
provide suggestions. You can dynamically generate the options
```
func listHosts() []string {
// Provide a dynamic list of hosts from a hosts file or otherwise
// for bash completion. In this example we simply return static slice.
// You could use this functionality to reach into a hosts file to provide
// completion for a list of known hosts.
return []string{"sshhost.example", "webhost.example", "ftphost.example"}
}
app := kingpin.New("completion", "My application with bash completion.")
app.Flag("flag-1", "").HintAction(listHosts).String()
```
**EnumVar/Enum**
When using `Enum` or `EnumVar`, any provided options will be automatically
used for bash autocompletion. However, if you wish to provide a subset or
different options, you can use `HintOptions` or `HintAction` which will override
the default completion options for `Enum`/`EnumVar`.
**Examples**
You can see an in depth example of the completion API within
`examples/completion/main.go`
### Supporting -h for help
`kingpin.CommandLine.HelpFlag.Short('h')`
Short help is also available when creating a more complicated app:
```go
var (
app = kingpin.New("chat", "A command-line chat application.")
// ...
)
func main() {
app.HelpFlag.Short('h')
switch kingpin.MustParse(app.Parse(os.Args[1:])) {
// ...
}
}
```
### Custom help
Kingpin v2 supports templatised help using the text/template library (actually, [a fork](https://github.com/alecthomas/template)).
You can specify the template to use with the [Application.UsageTemplate()](http://godoc.org/github.com/alecthomas/kingpin/v2#Application.UsageTemplate) function.
There are four included templates: `kingpin.DefaultUsageTemplate` is the default,
`kingpin.CompactUsageTemplate` provides a more compact representation for more complex command-line structures,
`kingpin.SeparateOptionalFlagsUsageTemplate` looks like the default template, but splits required
and optional command flags into separate lists, and `kingpin.ManPageTemplate` is used to generate man pages.
See the above templates for examples of usage, and the [UsageForContextWithTemplate()](https://github.com/alecthomas/kingpin/blob/master/usage.go#L198) method for details on the context.
#### Default help template
```
$ go run ./examples/curl/curl.go --help
usage: curl [<flags>] <command> [<args> ...]
An example implementation of curl.
Flags:
--help Show help.
-t, --timeout=5s Set connection timeout.
-H, --headers=HEADER=VALUE
Add HTTP headers to the request.
Commands:
help [<command>...]
Show help.
get url <url>
Retrieve a URL.
get file <file>
Retrieve a file.
post [<flags>] <url>
POST a resource.
```
#### Compact help template
```
$ go run ./examples/curl/curl.go --help
usage: curl [<flags>] <command> [<args> ...]
An example implementation of curl.
Flags:
--help Show help.
-t, --timeout=5s Set connection timeout.
-H, --headers=HEADER=VALUE
Add HTTP headers to the request.
Commands:
help [<command>...]
get [<flags>]
url <url>
file <file>
post [<flags>] <url>
```

42
vendor/github.com/alecthomas/kingpin/v2/actions.go generated vendored Normal file
View file

@@ -0,0 +1,42 @@
package kingpin
// Action callback executed at various stages after all values are populated.
// The application, commands, arguments and flags all have corresponding
// actions.
type Action func(*ParseContext) error
type actionMixin struct {
actions []Action
preActions []Action
}
type actionApplier interface {
applyActions(*ParseContext) error
applyPreActions(*ParseContext) error
}
func (a *actionMixin) addAction(action Action) {
a.actions = append(a.actions, action)
}
func (a *actionMixin) addPreAction(action Action) {
a.preActions = append(a.preActions, action)
}
func (a *actionMixin) applyActions(context *ParseContext) error {
for _, action := range a.actions {
if err := action(context); err != nil {
return err
}
}
return nil
}
func (a *actionMixin) applyPreActions(context *ParseContext) error {
for _, preAction := range a.preActions {
if err := preAction(context); err != nil {
return err
}
}
return nil
}

695
vendor/github.com/alecthomas/kingpin/v2/app.go generated vendored Normal file
View file

@@ -0,0 +1,695 @@
package kingpin
import (
"fmt"
"io"
"os"
"regexp"
"strings"
"text/template"
)
var (
ErrCommandNotSpecified = fmt.Errorf("command not specified")
)
var (
envarTransformRegexp = regexp.MustCompile(`[^a-zA-Z0-9_]+`)
)
type ApplicationValidator func(*Application) error
// An Application contains the definitions of flags, arguments and commands
// for an application.
type Application struct {
cmdMixin
initialized bool
Name string
Help string
author string
version string
errorWriter io.Writer // Destination for errors.
usageWriter io.Writer // Destination for usage
usageTemplate string
usageFuncs template.FuncMap
validator ApplicationValidator
terminate func(status int) // See Terminate()
noInterspersed bool // can flags be interspersed with args (or must they come first)
defaultEnvars bool
completion bool
// Help flag. Exposed for user customisation.
HelpFlag *FlagClause
// Help command. Exposed for user customisation. May be nil.
HelpCommand *CmdClause
// Version flag. Exposed for user customisation. May be nil.
VersionFlag *FlagClause
}
// New creates a new Kingpin application instance.
func New(name, help string) *Application {
a := &Application{
Name: name,
Help: help,
errorWriter: os.Stderr, // Left for backwards compatibility purposes.
usageWriter: os.Stderr,
usageTemplate: DefaultUsageTemplate,
terminate: os.Exit,
}
a.flagGroup = newFlagGroup()
a.argGroup = newArgGroup()
a.cmdGroup = newCmdGroup(a)
a.HelpFlag = a.Flag("help", "Show context-sensitive help (also try --help-long and --help-man).")
a.HelpFlag.Bool()
a.Flag("help-long", "Generate long help.").Hidden().PreAction(a.generateLongHelp).Bool()
a.Flag("help-man", "Generate a man page.").Hidden().PreAction(a.generateManPage).Bool()
a.Flag("completion-bash", "Output possible completions for the given args.").Hidden().BoolVar(&a.completion)
a.Flag("completion-script-bash", "Generate completion script for bash.").Hidden().PreAction(a.generateBashCompletionScript).Bool()
a.Flag("completion-script-zsh", "Generate completion script for ZSH.").Hidden().PreAction(a.generateZSHCompletionScript).Bool()
return a
}
func (a *Application) generateLongHelp(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, LongHelpTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
func (a *Application) generateManPage(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, ManPageTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
func (a *Application) generateBashCompletionScript(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, BashCompletionTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
func (a *Application) generateZSHCompletionScript(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, ZshCompletionTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
// DefaultEnvars configures all flags (that do not already have an associated
// envar) to use a default environment variable in the form "<app>_<flag>".
//
// For example, if the application is named "foo" and a flag is named "bar-
// waz" the environment variable: "FOO_BAR_WAZ".
func (a *Application) DefaultEnvars() *Application {
a.defaultEnvars = true
return a
}
// Terminate specifies the termination handler. Defaults to os.Exit(status).
// If nil is passed, a no-op function will be used.
func (a *Application) Terminate(terminate func(int)) *Application {
if terminate == nil {
terminate = func(int) {}
}
a.terminate = terminate
return a
}
// Writer specifies the writer to use for usage and errors. Defaults to os.Stderr.
// DEPRECATED: See ErrorWriter and UsageWriter.
func (a *Application) Writer(w io.Writer) *Application {
a.errorWriter = w
a.usageWriter = w
return a
}
// ErrorWriter sets the io.Writer to use for errors.
func (a *Application) ErrorWriter(w io.Writer) *Application {
a.errorWriter = w
return a
}
// UsageWriter sets the io.Writer to use for usage.
func (a *Application) UsageWriter(w io.Writer) *Application {
a.usageWriter = w
return a
}
// UsageTemplate specifies the text template to use when displaying usage
// information. The default is DefaultUsageTemplate.
func (a *Application) UsageTemplate(template string) *Application {
a.usageTemplate = template
return a
}
// UsageFuncs adds extra functions that can be used in the usage template.
func (a *Application) UsageFuncs(funcs template.FuncMap) *Application {
a.usageFuncs = funcs
return a
}
// Validate sets a validation function to run when parsing.
func (a *Application) Validate(validator ApplicationValidator) *Application {
a.validator = validator
return a
}
// ParseContext parses the given command line and returns the fully populated
// ParseContext.
func (a *Application) ParseContext(args []string) (*ParseContext, error) {
return a.parseContext(false, args)
}
func (a *Application) parseContext(ignoreDefault bool, args []string) (*ParseContext, error) {
if err := a.init(); err != nil {
return nil, err
}
context := tokenize(args, ignoreDefault)
err := parse(context, a)
return context, err
}
// Parse parses command-line arguments. It returns the selected command and an
// error. The selected command will be a space separated subcommand, if
// subcommands have been configured.
//
// This will populate all flag and argument values, call all callbacks, and so
// on.
func (a *Application) Parse(args []string) (command string, err error) {
context, parseErr := a.ParseContext(args)
selected := []string{}
var setValuesErr error
if context == nil {
// Since we do not return an error immediately, there could be a case
// where the returned context is nil. Protect against that.
return "", parseErr
}
if err = a.setDefaults(context); err != nil {
return "", err
}
selected, setValuesErr = a.setValues(context)
if err = a.applyPreActions(context, !a.completion); err != nil {
return "", err
}
if a.completion {
a.generateBashCompletion(context)
a.terminate(0)
} else {
if parseErr != nil {
return "", parseErr
}
a.maybeHelp(context)
if !context.EOL() {
return "", fmt.Errorf("unexpected argument '%s'", context.Peek())
}
if setValuesErr != nil {
return "", setValuesErr
}
command, err = a.execute(context, selected)
if err == ErrCommandNotSpecified {
a.writeUsage(context, nil)
}
}
return command, err
}
func (a *Application) writeUsage(context *ParseContext, err error) {
if err != nil {
a.Errorf("%s", err)
}
if err := a.UsageForContext(context); err != nil {
panic(err)
}
if err != nil {
a.terminate(1)
} else {
a.terminate(0)
}
}
func (a *Application) maybeHelp(context *ParseContext) {
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok && flag == a.HelpFlag {
// Re-parse the command-line ignoring defaults, so that help works correctly.
context, _ = a.parseContext(true, context.rawArgs)
a.writeUsage(context, nil)
}
}
}
// Version adds a --version flag for displaying the application version.
func (a *Application) Version(version string) *Application {
a.version = version
a.VersionFlag = a.Flag("version", "Show application version.").PreAction(func(*ParseContext) error {
fmt.Fprintln(a.usageWriter, version)
a.terminate(0)
return nil
})
a.VersionFlag.Bool()
return a
}
// Author sets the author output by some help templates.
func (a *Application) Author(author string) *Application {
a.author = author
return a
}
// Action callback to call when all values are populated and parsing is
// complete, but before any command, flag or argument actions.
//
// All Action() callbacks are called in the order they are encountered on the
// command line.
func (a *Application) Action(action Action) *Application {
a.addAction(action)
return a
}
// PreAction adds a callback that is called after parsing completes but before validation and execution.
func (a *Application) PreAction(action Action) *Application {
a.addPreAction(action)
return a
}
// Command adds a new top-level command.
func (a *Application) Command(name, help string) *CmdClause {
return a.addCommand(name, help)
}
// Interspersed controls whether flags can be interspersed with positional arguments.
//
// true (the default) means that they can; false means that all flags must appear before the first positional argument.
func (a *Application) Interspersed(interspersed bool) *Application {
a.noInterspersed = !interspersed
return a
}
func (a *Application) defaultEnvarPrefix() string {
if a.defaultEnvars {
return a.Name
}
return ""
}
func (a *Application) init() error {
if a.initialized {
return nil
}
if a.cmdGroup.have() && a.argGroup.have() {
return fmt.Errorf("can't mix top-level Arg()s with Command()s")
}
// If we have subcommands, add a help command at the top-level.
if a.cmdGroup.have() {
var command []string
a.HelpCommand = a.Command("help", "Show help.").PreAction(func(context *ParseContext) error {
a.Usage(command)
a.terminate(0)
return nil
})
a.HelpCommand.Arg("command", "Show help on command.").StringsVar(&command)
// Make help first command.
l := len(a.commandOrder)
a.commandOrder = append(a.commandOrder[l-1:l], a.commandOrder[:l-1]...)
}
if err := a.flagGroup.init(a.defaultEnvarPrefix()); err != nil {
return err
}
if err := a.cmdGroup.init(); err != nil {
return err
}
if err := a.argGroup.init(); err != nil {
return err
}
for _, cmd := range a.commands {
if err := cmd.init(); err != nil {
return err
}
}
flagGroups := []*flagGroup{a.flagGroup}
for _, cmd := range a.commandOrder {
if err := checkDuplicateFlags(cmd, flagGroups); err != nil {
return err
}
}
a.initialized = true
return nil
}
// Recursively check commands for duplicate flags.
func checkDuplicateFlags(current *CmdClause, flagGroups []*flagGroup) error {
// Check for duplicates.
for _, flags := range flagGroups {
for _, flag := range current.flagOrder {
if flag.shorthand != 0 {
if _, ok := flags.short[string(flag.shorthand)]; ok {
return fmt.Errorf("duplicate short flag -%c", flag.shorthand)
}
}
if _, ok := flags.long[flag.name]; ok {
return fmt.Errorf("duplicate long flag --%s", flag.name)
}
}
}
flagGroups = append(flagGroups, current.flagGroup)
// Check subcommands.
for _, subcmd := range current.commandOrder {
if err := checkDuplicateFlags(subcmd, flagGroups); err != nil {
return err
}
}
return nil
}
func (a *Application) execute(context *ParseContext, selected []string) (string, error) {
var err error
if err = a.validateRequired(context); err != nil {
return "", err
}
if err = a.applyValidators(context); err != nil {
return "", err
}
if err = a.applyActions(context); err != nil {
return "", err
}
command := strings.Join(selected, " ")
if command == "" && a.cmdGroup.have() {
return "", ErrCommandNotSpecified
}
return command, err
}
func (a *Application) setDefaults(context *ParseContext) error {
flagElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok {
if flag.name == "help" {
return nil
}
flagElements[flag.name] = element
}
}
argElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if arg, ok := element.Clause.(*ArgClause); ok {
argElements[arg.name] = element
}
}
// Check required flags and set defaults.
for _, flag := range context.flags.long {
if flagElements[flag.name] == nil {
if err := flag.setDefault(); err != nil {
return err
}
}
}
for _, arg := range context.arguments.args {
if argElements[arg.name] == nil {
if err := arg.setDefault(); err != nil {
return err
}
}
}
return nil
}
func (a *Application) validateRequired(context *ParseContext) error {
flagElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok {
flagElements[flag.name] = element
}
}
argElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if arg, ok := element.Clause.(*ArgClause); ok {
argElements[arg.name] = element
}
}
// Check that required flags and arguments were provided.
for _, flag := range context.flags.long {
if flagElements[flag.name] == nil {
// Check required flags were provided.
if flag.needsValue() {
return fmt.Errorf("required flag --%s not provided", flag.name)
}
}
}
for _, arg := range context.arguments.args {
if argElements[arg.name] == nil {
if arg.needsValue() {
return fmt.Errorf("required argument '%s' not provided", arg.name)
}
}
}
return nil
}
func (a *Application) setValues(context *ParseContext) (selected []string, err error) {
// Set all arg and flag values.
var (
lastCmd *CmdClause
flagSet = map[string]struct{}{}
)
for _, element := range context.Elements {
switch clause := element.Clause.(type) {
case *FlagClause:
if _, ok := flagSet[clause.name]; ok {
if v, ok := clause.value.(repeatableFlag); !ok || !v.IsCumulative() {
return nil, fmt.Errorf("flag '%s' cannot be repeated", clause.name)
}
}
if err = clause.value.Set(*element.Value); err != nil {
return
}
flagSet[clause.name] = struct{}{}
case *ArgClause:
if err = clause.value.Set(*element.Value); err != nil {
return
}
case *CmdClause:
selected = append(selected, clause.name)
lastCmd = clause
}
}
if lastCmd != nil && len(lastCmd.commands) > 0 {
return nil, fmt.Errorf("must select a subcommand of '%s'", lastCmd.FullCommand())
}
return
}
func (a *Application) applyValidators(context *ParseContext) (err error) {
// Call command validation functions.
for _, element := range context.Elements {
if cmd, ok := element.Clause.(*CmdClause); ok && cmd.validator != nil {
if err = cmd.validator(cmd); err != nil {
return err
}
}
}
if a.validator != nil {
err = a.validator(a)
}
return err
}
func (a *Application) applyPreActions(context *ParseContext, dispatch bool) error {
if err := a.actionMixin.applyPreActions(context); err != nil {
return err
}
// Dispatch to actions.
if dispatch {
for _, element := range context.Elements {
if applier, ok := element.Clause.(actionApplier); ok {
if err := applier.applyPreActions(context); err != nil {
return err
}
}
}
}
return nil
}
func (a *Application) applyActions(context *ParseContext) error {
if err := a.actionMixin.applyActions(context); err != nil {
return err
}
// Dispatch to actions.
for _, element := range context.Elements {
if applier, ok := element.Clause.(actionApplier); ok {
if err := applier.applyActions(context); err != nil {
return err
}
}
}
return nil
}
// Errorf prints an error message to w in the format "<appname>: error: <message>".
func (a *Application) Errorf(format string, args ...interface{}) {
fmt.Fprintf(a.errorWriter, a.Name+": error: "+format+"\n", args...)
}
// Fatalf writes a formatted error to w then terminates with exit status 1.
func (a *Application) Fatalf(format string, args ...interface{}) {
a.Errorf(format, args...)
a.terminate(1)
}
// FatalUsage prints an error message followed by usage information, then
// exits with a non-zero status.
func (a *Application) FatalUsage(format string, args ...interface{}) {
a.Errorf(format, args...)
// Force usage to go to error output.
a.usageWriter = a.errorWriter
a.Usage([]string{})
a.terminate(1)
}
// FatalUsageContext writes a printf formatted error message to w, then usage
// information for the given ParseContext, before exiting.
func (a *Application) FatalUsageContext(context *ParseContext, format string, args ...interface{}) {
a.Errorf(format, args...)
if err := a.UsageForContext(context); err != nil {
panic(err)
}
a.terminate(1)
}
// FatalIfError prints an error and exits if err is not nil. The error is printed
// with the given formatted string, if any.
func (a *Application) FatalIfError(err error, format string, args ...interface{}) {
if err != nil {
prefix := ""
if format != "" {
prefix = fmt.Sprintf(format, args...) + ": "
}
a.Errorf(prefix+"%s", err)
a.terminate(1)
}
}
func (a *Application) completionOptions(context *ParseContext) []string {
args := context.rawArgs
var (
currArg string
prevArg string
target cmdMixin
)
numArgs := len(args)
if numArgs > 1 {
args = args[1:]
currArg = args[len(args)-1]
}
if numArgs > 2 {
prevArg = args[len(args)-2]
}
target = a.cmdMixin
if context.SelectedCommand != nil {
// A subcommand was in use. We will use it as the target
target = context.SelectedCommand.cmdMixin
}
if (currArg != "" && strings.HasPrefix(currArg, "--")) || strings.HasPrefix(prevArg, "--") {
if context.argsOnly {
return nil
}
// Perform completion for a flag. The last/current argument started with "--".
var (
flagName string // The name of a flag if given (could be half complete)
flagValue string // The value assigned to a flag (if given) (could be half complete)
)
if strings.HasPrefix(prevArg, "--") && !strings.HasPrefix(currArg, "--") {
// Matches: ./myApp --flag value
// Won't match: ./myApp --flag --
flagName = prevArg[2:] // Strip the "--"
flagValue = currArg
} else if strings.HasPrefix(currArg, "--") {
// Matches: ./myApp --flag --
// Matches: ./myApp --flag somevalue --
// Matches: ./myApp --
flagName = currArg[2:] // Strip the "--"
}
options, flagMatched, valueMatched := target.FlagCompletion(flagName, flagValue)
if valueMatched {
// Value Matched. Show cmdCompletions
return target.CmdCompletion(context)
}
// Add top level flags if we're not at the top level and no match was found.
if context.SelectedCommand != nil && !flagMatched {
topOptions, topFlagMatched, topValueMatched := a.FlagCompletion(flagName, flagValue)
if topValueMatched {
// Value Matched. Back to cmdCompletions
return target.CmdCompletion(context)
}
if topFlagMatched {
// Top level had a flag which matched the input. Return its options.
options = topOptions
} else {
// Add top level flags
options = append(options, topOptions...)
}
}
return options
}
// Perform completion for sub commands and arguments.
return target.CmdCompletion(context)
}
func (a *Application) generateBashCompletion(context *ParseContext) {
options := a.completionOptions(context)
fmt.Printf("%s", strings.Join(options, "\n"))
}
func envarTransform(name string) string {
return strings.ToUpper(envarTransformRegexp.ReplaceAllString(name, "_"))
}
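The file above ties the pieces together: New wires up the help/version flags, DefaultEnvars derives "<APP>_<FLAG>" environment variables, and Parse drives tokenizing, defaults, validation and actions. A short end-to-end sketch of that API (illustrative; names and the default address are placeholders):
// Sketch: building and parsing a small application.
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("demo", "Demo application.").Version("0.1.0").Author("example")
	app.DefaultEnvars() // --listen-address can then also be set via DEMO_LISTEN_ADDRESS
	listen := app.Flag("listen-address", "Address to listen on.").Default(":8080").String()
	kingpin.MustParse(app.Parse(os.Args[1:]))
	fmt.Println("listening on", *listen)
}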

205
vendor/github.com/alecthomas/kingpin/v2/args.go generated vendored Normal file
View file

@ -0,0 +1,205 @@
package kingpin
import (
"fmt"
)
type argGroup struct {
args []*ArgClause
}
func newArgGroup() *argGroup {
return &argGroup{}
}
func (a *argGroup) have() bool {
return len(a.args) > 0
}
// GetArg gets an argument definition.
//
// This allows existing arguments to be modified after definition but before parsing. Useful for
// modular applications.
func (a *argGroup) GetArg(name string) *ArgClause {
for _, arg := range a.args {
if arg.name == name {
return arg
}
}
return nil
}
func (a *argGroup) Arg(name, help string) *ArgClause {
arg := newArg(name, help)
a.args = append(a.args, arg)
return arg
}
func (a *argGroup) init() error {
required := 0
seen := map[string]struct{}{}
previousArgMustBeLast := false
for i, arg := range a.args {
if previousArgMustBeLast {
return fmt.Errorf("Args() can't be followed by another argument '%s'", arg.name)
}
if arg.consumesRemainder() {
previousArgMustBeLast = true
}
if _, ok := seen[arg.name]; ok {
return fmt.Errorf("duplicate argument '%s'", arg.name)
}
seen[arg.name] = struct{}{}
if arg.required && required != i {
return fmt.Errorf("required arguments found after non-required")
}
if arg.required {
required++
}
if err := arg.init(); err != nil {
return err
}
}
return nil
}
type ArgClause struct {
actionMixin
parserMixin
completionsMixin
envarMixin
name string
help string
defaultValues []string
placeholder string
hidden bool
required bool
}
func newArg(name, help string) *ArgClause {
a := &ArgClause{
name: name,
help: help,
}
return a
}
func (a *ArgClause) setDefault() error {
if a.HasEnvarValue() {
if v, ok := a.value.(remainderArg); !ok || !v.IsCumulative() {
// Use the value as-is
return a.value.Set(a.GetEnvarValue())
}
for _, value := range a.GetSplitEnvarValue() {
if err := a.value.Set(value); err != nil {
return err
}
}
return nil
}
if len(a.defaultValues) > 0 {
for _, defaultValue := range a.defaultValues {
if err := a.value.Set(defaultValue); err != nil {
return err
}
}
return nil
}
return nil
}
func (a *ArgClause) needsValue() bool {
haveDefault := len(a.defaultValues) > 0
return a.required && !(haveDefault || a.HasEnvarValue())
}
func (a *ArgClause) consumesRemainder() bool {
if r, ok := a.value.(remainderArg); ok {
return r.IsCumulative()
}
return false
}
// Hidden hides the argument from usage but still allows it to be used.
func (a *ArgClause) Hidden() *ArgClause {
a.hidden = true
return a
}
// PlaceHolder sets the place-holder string used for arg values in the help. The
// default behaviour is to use the arg name between < > brackets.
func (a *ArgClause) PlaceHolder(value string) *ArgClause {
a.placeholder = value
return a
}
// Required arguments must be input by the user. They can not have a Default() value provided.
func (a *ArgClause) Required() *ArgClause {
a.required = true
return a
}
// Default values for this argument. They *must* be parseable by the value of the argument.
func (a *ArgClause) Default(values ...string) *ArgClause {
a.defaultValues = values
return a
}
// Envar overrides the default value(s) for an argument from an environment variable,
// if it is set. Several default values can be provided by using new lines to
// separate them.
func (a *ArgClause) Envar(name string) *ArgClause {
a.envar = name
a.noEnvar = false
return a
}
// NoEnvar forces environment variable defaults to be disabled for this argument.
// Most useful in conjunction with app.DefaultEnvars().
func (a *ArgClause) NoEnvar() *ArgClause {
a.envar = ""
a.noEnvar = true
return a
}
func (a *ArgClause) Action(action Action) *ArgClause {
a.addAction(action)
return a
}
func (a *ArgClause) PreAction(action Action) *ArgClause {
a.addPreAction(action)
return a
}
// HintAction registers a HintAction (function) for the arg to provide completions
func (a *ArgClause) HintAction(action HintAction) *ArgClause {
a.addHintAction(action)
return a
}
// HintOptions registers any number of options for the flag to provide completions
func (a *ArgClause) HintOptions(options ...string) *ArgClause {
a.addHintAction(func() []string {
return options
})
return a
}
// Help sets the help message.
func (a *ArgClause) Help(help string) *ArgClause {
a.help = help
return a
}
func (a *ArgClause) init() error {
if a.required && len(a.defaultValues) > 0 {
return fmt.Errorf("required argument '%s' with unusable default value", a.name)
}
if a.value == nil {
return fmt.Errorf("no parser defined for arg '%s'", a.name)
}
return nil
}
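ArgClause is the positional-argument counterpart to FlagClause: arguments can be required, carry defaults or read their default from an environment variable. A sketch (illustrative; the program and file names are placeholders):
// Sketch: declaring positional arguments.
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("copytool", "Copy a file.")
	src := app.Arg("source", "Source file (must exist).").Required().ExistingFile()
	dst := app.Arg("dest", "Destination path.").Default("out.bin").String()
	kingpin.MustParse(app.Parse(os.Args[1:]))
	fmt.Println("would copy", *src, "to", *dst)
}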

325
vendor/github.com/alecthomas/kingpin/v2/cmd.go generated vendored Normal file
View file

@ -0,0 +1,325 @@
package kingpin
import (
"fmt"
"strings"
)
type cmdMixin struct {
*flagGroup
*argGroup
*cmdGroup
actionMixin
}
// CmdCompletion returns completion options for arguments, if that's where
// parsing left off, or commands if there aren't any unsatisfied args.
func (c *cmdMixin) CmdCompletion(context *ParseContext) []string {
var options []string
// Count args already satisfied - we won't complete those, and add any
// default commands' alternatives, since they weren't listed explicitly
// and the user may want to explicitly list something else.
argsSatisfied := 0
allSatisfied := false
ElementLoop:
for _, el := range context.Elements {
switch clause := el.Clause.(type) {
case *ArgClause:
// Each new element should reset the previous state
allSatisfied = false
options = nil
if el.Value != nil && *el.Value != "" {
// Get the list of valid options for the last argument
validOptions := c.argGroup.args[argsSatisfied].resolveCompletions()
if len(validOptions) == 0 {
// If there are no options for this argument,
// mark it as allSatisfied as we can't suggest anything
if !clause.consumesRemainder() {
argsSatisfied++
allSatisfied = true
}
continue ElementLoop
}
for _, opt := range validOptions {
if opt == *el.Value {
// We have an exact match
// We don't need to suggest any option
if !clause.consumesRemainder() {
argsSatisfied++
}
continue ElementLoop
}
if strings.HasPrefix(opt, *el.Value) {
// If the option matches the partially entered argument, add it to the list
options = append(options, opt)
}
}
// Avoid further completion as we have done everything we could
if !clause.consumesRemainder() {
argsSatisfied++
allSatisfied = true
}
}
case *CmdClause:
options = append(options, clause.completionAlts...)
default:
}
}
if argsSatisfied < len(c.argGroup.args) && !allSatisfied {
// Since not all args have been satisfied, show options for the current one
options = append(options, c.argGroup.args[argsSatisfied].resolveCompletions()...)
} else {
// If all args are satisfied, then go back to completing commands
for _, cmd := range c.cmdGroup.commandOrder {
if !cmd.hidden {
options = append(options, cmd.name)
}
}
}
return options
}
func (c *cmdMixin) FlagCompletion(flagName string, flagValue string) (choices []string, flagMatch bool, optionMatch bool) {
// Check if flagName matches a known flag.
// If it does, show the options for the flag
// Otherwise, show all flags
options := []string{}
for _, flag := range c.flagGroup.flagOrder {
// Loop through each flag and determine if a match exists
if flag.name == flagName {
// User typed entire flag. Need to look for flag options.
options = flag.resolveCompletions()
if len(options) == 0 {
// No Options to Choose From, Assume Match.
return options, true, true
}
// Loop options to find if the user specified value matches
isPrefix := false
matched := false
for _, opt := range options {
if flagValue == opt {
matched = true
} else if strings.HasPrefix(opt, flagValue) {
isPrefix = true
}
}
// Matched Flag Directly
// Flag Value Not Prefixed, and Matched Directly
return options, true, !isPrefix && matched
}
if !flag.hidden {
options = append(options, "--"+flag.name)
}
}
// No Flag directly matched.
return options, false, false
}
type cmdGroup struct {
app *Application
parent *CmdClause
commands map[string]*CmdClause
commandOrder []*CmdClause
}
func (c *cmdGroup) defaultSubcommand() *CmdClause {
for _, cmd := range c.commandOrder {
if cmd.isDefault {
return cmd
}
}
return nil
}
func (c *cmdGroup) cmdNames() []string {
names := make([]string, 0, len(c.commandOrder))
for _, cmd := range c.commandOrder {
names = append(names, cmd.name)
}
return names
}
// GetCommand gets a command definition.
//
// This allows existing commands to be modified after definition but before parsing. Useful for
// modular applications.
func (c *cmdGroup) GetCommand(name string) *CmdClause {
return c.commands[name]
}
func newCmdGroup(app *Application) *cmdGroup {
return &cmdGroup{
app: app,
commands: make(map[string]*CmdClause),
}
}
func (c *cmdGroup) flattenedCommands() (out []*CmdClause) {
for _, cmd := range c.commandOrder {
if len(cmd.commands) == 0 {
out = append(out, cmd)
}
out = append(out, cmd.flattenedCommands()...)
}
return
}
func (c *cmdGroup) addCommand(name, help string) *CmdClause {
cmd := newCommand(c.app, name, help)
c.commands[name] = cmd
c.commandOrder = append(c.commandOrder, cmd)
return cmd
}
func (c *cmdGroup) init() error {
seen := map[string]bool{}
if c.defaultSubcommand() != nil && !c.have() {
return fmt.Errorf("default subcommand %q provided but no subcommands defined", c.defaultSubcommand().name)
}
defaults := []string{}
for _, cmd := range c.commandOrder {
if cmd.isDefault {
defaults = append(defaults, cmd.name)
}
if seen[cmd.name] {
return fmt.Errorf("duplicate command %q", cmd.name)
}
seen[cmd.name] = true
for _, alias := range cmd.aliases {
if seen[alias] {
return fmt.Errorf("alias duplicates existing command %q", alias)
}
c.commands[alias] = cmd
}
if err := cmd.init(); err != nil {
return err
}
}
if len(defaults) > 1 {
return fmt.Errorf("more than one default subcommand exists: %s", strings.Join(defaults, ", "))
}
return nil
}
func (c *cmdGroup) have() bool {
return len(c.commands) > 0
}
type CmdClauseValidator func(*CmdClause) error
// A CmdClause is a single top-level command. It encapsulates a set of flags
// and either subcommands or positional arguments.
type CmdClause struct {
cmdMixin
app *Application
name string
aliases []string
help string
helpLong string
isDefault bool
validator CmdClauseValidator
hidden bool
completionAlts []string
}
func newCommand(app *Application, name, help string) *CmdClause {
c := &CmdClause{
app: app,
name: name,
help: help,
}
c.flagGroup = newFlagGroup()
c.argGroup = newArgGroup()
c.cmdGroup = newCmdGroup(app)
return c
}
// Add an Alias for this command.
func (c *CmdClause) Alias(name string) *CmdClause {
c.aliases = append(c.aliases, name)
return c
}
// Validate sets a validation function to run when parsing.
func (c *CmdClause) Validate(validator CmdClauseValidator) *CmdClause {
c.validator = validator
return c
}
func (c *CmdClause) FullCommand() string {
out := []string{c.name}
for p := c.parent; p != nil; p = p.parent {
out = append([]string{p.name}, out...)
}
return strings.Join(out, " ")
}
// Command adds a new sub-command.
func (c *CmdClause) Command(name, help string) *CmdClause {
cmd := c.addCommand(name, help)
cmd.parent = c
return cmd
}
// Default makes this command the default if commands don't match.
func (c *CmdClause) Default() *CmdClause {
c.isDefault = true
return c
}
func (c *CmdClause) Action(action Action) *CmdClause {
c.addAction(action)
return c
}
func (c *CmdClause) PreAction(action Action) *CmdClause {
c.addPreAction(action)
return c
}
// Help sets the help message.
func (c *CmdClause) Help(help string) *CmdClause {
c.help = help
return c
}
func (c *CmdClause) init() error {
if err := c.flagGroup.init(c.app.defaultEnvarPrefix()); err != nil {
return err
}
if c.argGroup.have() && c.cmdGroup.have() {
return fmt.Errorf("can't mix Arg()s with Command()s")
}
if err := c.argGroup.init(); err != nil {
return err
}
if err := c.cmdGroup.init(); err != nil {
return err
}
return nil
}
func (c *CmdClause) Hidden() *CmdClause {
c.hidden = true
return c
}
// HelpLong adds a long help text, which can be used in usage templates.
// For example, to use a longer help text in the command-specific help
// than in the app's root help.
func (c *CmdClause) HelpLong(help string) *CmdClause {
c.helpLong = help
return c
}
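CmdClause is how subcommands (and nested subcommands) are declared; Parse returns the selected command as a space-separated path, which is usually matched against FullCommand(). A sketch (illustrative; command and argument names are placeholders):
// Sketch: declaring commands and dispatching on the parse result.
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("pkgtool", "Package tool.")

	get := app.Command("get", "Fetch a package.").Alias("fetch")
	getName := get.Arg("name", "Package name.").Required().String()

	list := app.Command("list", "List installed packages.").Default()

	switch kingpin.MustParse(app.Parse(os.Args[1:])) {
	case get.FullCommand():
		fmt.Println("get", *getName)
	case list.FullCommand():
		fmt.Println("list")
	}
}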

33
vendor/github.com/alecthomas/kingpin/v2/completions.go generated vendored Normal file
View file

@ -0,0 +1,33 @@
package kingpin
// HintAction is a function type that is expected to return a slice of possible
// command line arguments.
type HintAction func() []string
type completionsMixin struct {
hintActions []HintAction
builtinHintActions []HintAction
}
func (a *completionsMixin) addHintAction(action HintAction) {
a.hintActions = append(a.hintActions, action)
}
// Allow adding of HintActions which are added internally, e.g. by EnumVar.
func (a *completionsMixin) addHintActionBuiltin(action HintAction) {
a.builtinHintActions = append(a.builtinHintActions, action)
}
func (a *completionsMixin) resolveCompletions() []string {
var hints []string
options := a.builtinHintActions
if len(a.hintActions) > 0 {
// User specified their own hintActions. Use those instead.
options = a.hintActions
}
for _, hintAction := range options {
hints = append(hints, hintAction()...)
}
return hints
}
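These hint actions are what back the --completion-bash and --completion-script-* support: built-in hints come from Enum values, while user hints are attached per flag or argument. A sketch (illustrative; flag names and option values are placeholders):
// Sketch: providing completion hints for flags.
package main

import (
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("deploy", "Deployment tool.")
	// Static completion candidates.
	app.Flag("region", "Target region.").HintOptions("eu-west-1", "us-east-1").String()
	// Candidates computed when completion runs.
	app.Flag("cluster", "Target cluster.").HintAction(func() []string {
		return []string{"staging", "production"}
	}).String()
	kingpin.MustParse(app.Parse(os.Args[1:]))
}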

68
vendor/github.com/alecthomas/kingpin/v2/doc.go generated vendored Normal file
View file

@ -0,0 +1,68 @@
// Package kingpin provides command line interfaces like this:
//
// $ chat
// usage: chat [<flags>] <command> [<flags>] [<args> ...]
//
// Flags:
// --debug enable debug mode
// --help Show help.
// --server=127.0.0.1 server address
//
// Commands:
// help <command>
// Show help for a command.
//
// post [<flags>] <channel>
// Post a message to a channel.
//
// register <nick> <name>
// Register a new user.
//
// $ chat help post
// usage: chat [<flags>] post [<flags>] <channel> [<text>]
//
// Post a message to a channel.
//
// Flags:
// --image=IMAGE image to post
//
// Args:
// <channel> channel to post to
// [<text>] text to post
// $ chat post --image=~/Downloads/owls.jpg pics
//
// From code like this:
//
// package main
//
// import "github.com/alecthomas/kingpin/v2"
//
// var (
// debug = kingpin.Flag("debug", "enable debug mode").Default("false").Bool()
// serverIP = kingpin.Flag("server", "server address").Default("127.0.0.1").IP()
//
// register = kingpin.Command("register", "Register a new user.")
// registerNick = register.Arg("nick", "nickname for user").Required().String()
// registerName = register.Arg("name", "name of user").Required().String()
//
// post = kingpin.Command("post", "Post a message to a channel.")
// postImage = post.Flag("image", "image to post").ExistingFile()
// postChannel = post.Arg("channel", "channel to post to").Required().String()
// postText = post.Arg("text", "text to post").String()
// )
//
// func main() {
// switch kingpin.Parse() {
// // Register user
// case "register":
// println(*registerNick)
//
// // Post message
// case "post":
// if *postImage != "" {
// }
// if *postText != "" {
// }
// }
// }
package kingpin

40
vendor/github.com/alecthomas/kingpin/v2/envar.go generated vendored Normal file
View file

@ -0,0 +1,40 @@
package kingpin
import (
"os"
"regexp"
)
var (
envVarValuesSeparator = "\r?\n"
envVarValuesTrimmer = regexp.MustCompile(envVarValuesSeparator + "$")
envVarValuesSplitter = regexp.MustCompile(envVarValuesSeparator)
)
type envarMixin struct {
envar string
noEnvar bool
}
func (e *envarMixin) HasEnvarValue() bool {
return e.GetEnvarValue() != ""
}
func (e *envarMixin) GetEnvarValue() string {
if e.noEnvar || e.envar == "" {
return ""
}
return os.Getenv(e.envar)
}
func (e *envarMixin) GetSplitEnvarValue() []string {
envarValue := e.GetEnvarValue()
if envarValue == "" {
return []string{}
}
// Split by new line to extract multiple values, if any.
trimmed := envVarValuesTrimmer.ReplaceAllString(envarValue, "")
return envVarValuesSplitter.Split(trimmed, -1)
}
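GetSplitEnvarValue is what lets a single environment variable feed a cumulative flag with several values, one per line. A sketch (illustrative; the variable name DEMO_SEARCH_PATH is a placeholder):
// Sketch: a repeatable flag whose default comes from a newline-separated envar,
// e.g. export DEMO_SEARCH_PATH=$'/etc/demo\n/usr/local/etc/demo'
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("demo", "Envar demo.")
	paths := app.Flag("search-path", "Search paths (repeatable).").Envar("DEMO_SEARCH_PATH").Strings()
	kingpin.MustParse(app.Parse(os.Args[1:]))
	fmt.Println(*paths)
}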

332
vendor/github.com/alecthomas/kingpin/v2/flags.go generated vendored Normal file
View file

@ -0,0 +1,332 @@
package kingpin
import (
"fmt"
"strings"
)
type flagGroup struct {
short map[string]*FlagClause
long map[string]*FlagClause
flagOrder []*FlagClause
}
func newFlagGroup() *flagGroup {
return &flagGroup{
short: map[string]*FlagClause{},
long: map[string]*FlagClause{},
}
}
// GetFlag gets a flag definition.
//
// This allows existing flags to be modified after definition but before parsing. Useful for
// modular applications.
func (f *flagGroup) GetFlag(name string) *FlagClause {
return f.long[name]
}
// Flag defines a new flag with the given long name and help.
func (f *flagGroup) Flag(name, help string) *FlagClause {
flag := newFlag(name, help)
f.long[name] = flag
f.flagOrder = append(f.flagOrder, flag)
return flag
}
func (f *flagGroup) init(defaultEnvarPrefix string) error {
if err := f.checkDuplicates(); err != nil {
return err
}
for _, flag := range f.long {
if defaultEnvarPrefix != "" && !flag.noEnvar && flag.envar == "" {
flag.envar = envarTransform(defaultEnvarPrefix + "_" + flag.name)
}
if err := flag.init(); err != nil {
return err
}
if flag.shorthand != 0 {
f.short[string(flag.shorthand)] = flag
}
}
return nil
}
func (f *flagGroup) checkDuplicates() error {
seenShort := map[rune]bool{}
seenLong := map[string]bool{}
for _, flag := range f.flagOrder {
if flag.shorthand != 0 {
if _, ok := seenShort[flag.shorthand]; ok {
return fmt.Errorf("duplicate short flag -%c", flag.shorthand)
}
seenShort[flag.shorthand] = true
}
if _, ok := seenLong[flag.name]; ok {
return fmt.Errorf("duplicate long flag --%s", flag.name)
}
seenLong[flag.name] = true
}
return nil
}
func (f *flagGroup) parse(context *ParseContext) (*FlagClause, error) {
var token *Token
loop:
for {
token = context.Peek()
switch token.Type {
case TokenEOL:
break loop
case TokenLong, TokenShort:
flagToken := token
defaultValue := ""
var flag *FlagClause
var ok bool
invert := false
name := token.Value
if token.Type == TokenLong {
flag, ok = f.long[name]
if !ok {
if strings.HasPrefix(name, "no-") {
name = name[3:]
invert = true
}
flag, ok = f.long[name]
}
if !ok {
return nil, fmt.Errorf("unknown long flag '%s'", flagToken)
}
} else {
flag, ok = f.short[name]
if !ok {
return nil, fmt.Errorf("unknown short flag '%s'", flagToken)
}
}
context.Next()
flag.isSetByUser()
fb, ok := flag.value.(boolFlag)
if ok && fb.IsBoolFlag() {
if invert {
defaultValue = "false"
} else {
defaultValue = "true"
}
} else {
if invert {
context.Push(token)
return nil, fmt.Errorf("unknown long flag '%s'", flagToken)
}
token = context.Peek()
if token.Type != TokenArg {
context.Push(token)
return nil, fmt.Errorf("expected argument for flag '%s'", flagToken)
}
context.Next()
defaultValue = token.Value
}
context.matchedFlag(flag, defaultValue)
return flag, nil
default:
break loop
}
}
return nil, nil
}
// FlagClause is a fluid interface used to build flags.
type FlagClause struct {
parserMixin
actionMixin
completionsMixin
envarMixin
name string
shorthand rune
help string
defaultValues []string
placeholder string
hidden bool
setByUser *bool
}
func newFlag(name, help string) *FlagClause {
f := &FlagClause{
name: name,
help: help,
}
return f
}
func (f *FlagClause) setDefault() error {
if f.HasEnvarValue() {
if v, ok := f.value.(repeatableFlag); !ok || !v.IsCumulative() {
// Use the value as-is
return f.value.Set(f.GetEnvarValue())
} else {
for _, value := range f.GetSplitEnvarValue() {
if err := f.value.Set(value); err != nil {
return err
}
}
return nil
}
}
if len(f.defaultValues) > 0 {
for _, defaultValue := range f.defaultValues {
if err := f.value.Set(defaultValue); err != nil {
return err
}
}
return nil
}
return nil
}
func (f *FlagClause) isSetByUser() {
if f.setByUser != nil {
*f.setByUser = true
}
}
func (f *FlagClause) needsValue() bool {
haveDefault := len(f.defaultValues) > 0
return f.required && !(haveDefault || f.HasEnvarValue())
}
func (f *FlagClause) init() error {
if f.required && len(f.defaultValues) > 0 {
return fmt.Errorf("required flag '--%s' with default value that will never be used", f.name)
}
if f.value == nil {
return fmt.Errorf("no type defined for --%s (eg. .String())", f.name)
}
if v, ok := f.value.(repeatableFlag); (!ok || !v.IsCumulative()) && len(f.defaultValues) > 1 {
return fmt.Errorf("invalid default for '--%s', expecting single value", f.name)
}
return nil
}
// Dispatch to the given function after the flag is parsed and validated.
func (f *FlagClause) Action(action Action) *FlagClause {
f.addAction(action)
return f
}
func (f *FlagClause) PreAction(action Action) *FlagClause {
f.addPreAction(action)
return f
}
// HintAction registers a HintAction (function) for the flag to provide completions
func (a *FlagClause) HintAction(action HintAction) *FlagClause {
a.addHintAction(action)
return a
}
// HintOptions registers any number of options for the flag to provide completions
func (a *FlagClause) HintOptions(options ...string) *FlagClause {
a.addHintAction(func() []string {
return options
})
return a
}
func (a *FlagClause) EnumVar(target *string, options ...string) {
a.parserMixin.EnumVar(target, options...)
a.addHintActionBuiltin(func() []string {
return options
})
}
func (a *FlagClause) Enum(options ...string) (target *string) {
a.addHintActionBuiltin(func() []string {
return options
})
return a.parserMixin.Enum(options...)
}
// IsSetByUser lets the caller know whether the flag was set by the user.
func (f *FlagClause) IsSetByUser(setByUser *bool) *FlagClause {
if setByUser != nil {
*setByUser = false
}
f.setByUser = setByUser
return f
}
// Default values for this flag. They *must* be parseable by the value of the flag.
func (f *FlagClause) Default(values ...string) *FlagClause {
f.defaultValues = values
return f
}
// DEPRECATED: Use Envar(name) instead.
func (f *FlagClause) OverrideDefaultFromEnvar(envar string) *FlagClause {
return f.Envar(envar)
}
// Envar overrides the default value(s) for a flag from an environment variable,
// if it is set. Several default values can be provided by using new lines to
// separate them.
func (f *FlagClause) Envar(name string) *FlagClause {
f.envar = name
f.noEnvar = false
return f
}
// NoEnvar forces environment variable defaults to be disabled for this flag.
// Most useful in conjunction with app.DefaultEnvars().
func (f *FlagClause) NoEnvar() *FlagClause {
f.envar = ""
f.noEnvar = true
return f
}
// PlaceHolder sets the place-holder string used for flag values in the help. The
// default behaviour is to use the value provided by Default() if provided,
// then fall back on the capitalized flag name.
func (f *FlagClause) PlaceHolder(placeholder string) *FlagClause {
f.placeholder = placeholder
return f
}
// Hidden hides a flag from usage but still allows it to be used.
func (f *FlagClause) Hidden() *FlagClause {
f.hidden = true
return f
}
// Required makes the flag required. You can not provide a Default() value to a Required() flag.
func (f *FlagClause) Required() *FlagClause {
f.required = true
return f
}
// Short sets the short flag name.
func (f *FlagClause) Short(name rune) *FlagClause {
f.shorthand = name
return f
}
// Help sets the help message.
func (f *FlagClause) Help(help string) *FlagClause {
f.help = help
return f
}
// Bool makes this flag a boolean flag.
func (f *FlagClause) Bool() (target *bool) {
target = new(bool)
f.SetValue(newBoolValue(target))
return
}
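FlagClause covers most per-flag behaviour: short names, defaults, required flags, enums, placeholders, hidden flags and boolean negation via --no-<flag>. A compact sketch (illustrative; flag names and values are placeholders):
// Sketch: common FlagClause features.
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("demo", "Flag demo.")
	verbose := app.Flag("verbose", "Verbose output.").Short('v').Bool() // also accepts --no-verbose
	var levelSet bool
	level := app.Flag("level", "Log level.").Default("info").IsSetByUser(&levelSet).
		Enum("debug", "info", "warn", "error")
	apiKey := app.Flag("api-key", "API key.").Required().PlaceHolder("KEY").String()
	kingpin.MustParse(app.Parse(os.Args[1:]))
	fmt.Println(*verbose, *level, levelSet, *apiKey)
}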

96
vendor/github.com/alecthomas/kingpin/v2/global.go generated vendored Normal file
View file

@ -0,0 +1,96 @@
package kingpin
import (
"os"
"path/filepath"
)
var (
// CommandLine is the default Kingpin parser.
CommandLine = New(filepath.Base(os.Args[0]), "")
// Global help flag. Exposed for user customisation.
HelpFlag = CommandLine.HelpFlag
// Top-level help command. Exposed for user customisation. May be nil.
HelpCommand = CommandLine.HelpCommand
// Global version flag. Exposed for user customisation. May be nil.
VersionFlag = CommandLine.VersionFlag
// Whether file expansion with '@' is enabled.
EnableFileExpansion = true
)
// Command adds a new command to the default parser.
func Command(name, help string) *CmdClause {
return CommandLine.Command(name, help)
}
// Flag adds a new flag to the default parser.
func Flag(name, help string) *FlagClause {
return CommandLine.Flag(name, help)
}
// Arg adds a new argument to the top-level of the default parser.
func Arg(name, help string) *ArgClause {
return CommandLine.Arg(name, help)
}
// Parse and return the selected command. Will call the termination handler if
// an error is encountered.
func Parse() string {
selected := MustParse(CommandLine.Parse(os.Args[1:]))
if selected == "" && CommandLine.cmdGroup.have() {
Usage()
CommandLine.terminate(0)
}
return selected
}
// Errorf prints an error message to stderr.
func Errorf(format string, args ...interface{}) {
CommandLine.Errorf(format, args...)
}
// Fatalf prints an error message to stderr and exits.
func Fatalf(format string, args ...interface{}) {
CommandLine.Fatalf(format, args...)
}
// FatalIfError prints an error and exits if err is not nil. The error is printed
// with the given prefix.
func FatalIfError(err error, format string, args ...interface{}) {
CommandLine.FatalIfError(err, format, args...)
}
// FatalUsage prints an error message followed by usage information, then
// exits with a non-zero status.
func FatalUsage(format string, args ...interface{}) {
CommandLine.FatalUsage(format, args...)
}
// FatalUsageContext writes a printf formatted error message to stderr, then
// usage information for the given ParseContext, before exiting.
func FatalUsageContext(context *ParseContext, format string, args ...interface{}) {
CommandLine.FatalUsageContext(context, format, args...)
}
// Usage prints usage to stderr.
func Usage() {
CommandLine.Usage(os.Args[1:])
}
// Set global usage template to use (defaults to DefaultUsageTemplate).
func UsageTemplate(template string) *Application {
return CommandLine.UsageTemplate(template)
}
// MustParse can be used with app.Parse(args) to exit with an error if parsing fails.
func MustParse(command string, err error) string {
if err != nil {
Fatalf("%s, try --help", err)
}
return command
}
// Version adds a flag for displaying the application version number.
func Version(version string) *Application {
return CommandLine.Version(version)
}
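For small programs the package-level CommandLine parser can be used directly instead of constructing an Application by hand. A sketch (illustrative; names are placeholders):
// Sketch: using the package-level convenience functions.
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

var (
	debug = kingpin.Flag("debug", "Enable debug mode.").Bool()
	name  = kingpin.Arg("name", "Name to greet.").Required().String()
)

func main() {
	kingpin.Version("0.1.0")
	kingpin.Parse()
	fmt.Println("hello", *name, "debug:", *debug)
}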

View file

@ -0,0 +1,9 @@
// +build appengine !linux,!freebsd,!darwin,!dragonfly,!netbsd,!openbsd
package kingpin
import "io"
func guessWidth(w io.Writer) int {
return 80
}

View file

@ -0,0 +1,38 @@
// +build !appengine,linux freebsd darwin dragonfly netbsd openbsd
package kingpin
import (
"io"
"os"
"strconv"
"syscall"
"unsafe"
)
func guessWidth(w io.Writer) int {
// check if COLUMNS env is set to comply with
// http://pubs.opengroup.org/onlinepubs/009604499/basedefs/xbd_chap08.html
colsStr := os.Getenv("COLUMNS")
if colsStr != "" {
if cols, err := strconv.Atoi(colsStr); err == nil {
return cols
}
}
if t, ok := w.(*os.File); ok {
fd := t.Fd()
var dimensions [4]uint16
if _, _, err := syscall.Syscall6(
syscall.SYS_IOCTL,
uintptr(fd),
uintptr(syscall.TIOCGWINSZ),
uintptr(unsafe.Pointer(&dimensions)),
0, 0, 0,
); err == 0 {
return int(dimensions[1])
}
}
return 80
}

273
vendor/github.com/alecthomas/kingpin/v2/model.go generated vendored Normal file
View file

@ -0,0 +1,273 @@
package kingpin
import (
"fmt"
"strconv"
"strings"
)
// Data model for Kingpin command-line structure.
var (
ignoreInCount = map[string]bool{
"help": true,
"help-long": true,
"help-man": true,
"completion-bash": true,
"completion-script-bash": true,
"completion-script-zsh": true,
}
)
type FlagGroupModel struct {
Flags []*FlagModel
}
func (f *FlagGroupModel) FlagSummary() string {
out := []string{}
count := 0
for _, flag := range f.Flags {
if !ignoreInCount[flag.Name] {
count++
}
if flag.Required {
if flag.IsBoolFlag() {
out = append(out, fmt.Sprintf("--[no-]%s", flag.Name))
} else {
out = append(out, fmt.Sprintf("--%s=%s", flag.Name, flag.FormatPlaceHolder()))
}
}
}
if count != len(out) {
out = append(out, "[<flags>]")
}
return strings.Join(out, " ")
}
type FlagModel struct {
Name string
Help string
Short rune
Default []string
Envar string
PlaceHolder string
Required bool
Hidden bool
Value Value
}
func (f *FlagModel) String() string {
if f.Value == nil {
return ""
}
return f.Value.String()
}
func (f *FlagModel) IsBoolFlag() bool {
if fl, ok := f.Value.(boolFlag); ok {
return fl.IsBoolFlag()
}
return false
}
func (f *FlagModel) FormatPlaceHolder() string {
if f.PlaceHolder != "" {
return f.PlaceHolder
}
if len(f.Default) > 0 {
ellipsis := ""
if len(f.Default) > 1 {
ellipsis = "..."
}
if _, ok := f.Value.(*stringValue); ok {
return strconv.Quote(f.Default[0]) + ellipsis
}
return f.Default[0] + ellipsis
}
return strings.ToUpper(f.Name)
}
func (f *FlagModel) HelpWithEnvar() string {
if f.Envar == "" {
return f.Help
}
return fmt.Sprintf("%s ($%s)", f.Help, f.Envar)
}
type ArgGroupModel struct {
Args []*ArgModel
}
func (a *ArgGroupModel) ArgSummary() string {
depth := 0
out := []string{}
for _, arg := range a.Args {
var h string
if arg.PlaceHolder != "" {
h = arg.PlaceHolder
} else {
h = "<" + arg.Name + ">"
}
if !arg.Required {
h = "[" + h
depth++
}
out = append(out, h)
}
out[len(out)-1] = out[len(out)-1] + strings.Repeat("]", depth)
return strings.Join(out, " ")
}
func (a *ArgModel) HelpWithEnvar() string {
if a.Envar == "" {
return a.Help
}
return fmt.Sprintf("%s ($%s)", a.Help, a.Envar)
}
type ArgModel struct {
Name string
Help string
Default []string
Envar string
PlaceHolder string
Required bool
Hidden bool
Value Value
}
func (a *ArgModel) String() string {
if a.Value == nil {
return ""
}
return a.Value.String()
}
type CmdGroupModel struct {
Commands []*CmdModel
}
func (c *CmdGroupModel) FlattenedCommands() (out []*CmdModel) {
for _, cmd := range c.Commands {
if len(cmd.Commands) == 0 {
out = append(out, cmd)
}
out = append(out, cmd.FlattenedCommands()...)
}
return
}
type CmdModel struct {
Name string
Aliases []string
Help string
HelpLong string
FullCommand string
Depth int
Hidden bool
Default bool
*FlagGroupModel
*ArgGroupModel
*CmdGroupModel
}
func (c *CmdModel) String() string {
return c.FullCommand
}
type ApplicationModel struct {
Name string
Help string
Version string
Author string
*ArgGroupModel
*CmdGroupModel
*FlagGroupModel
}
func (a *Application) Model() *ApplicationModel {
return &ApplicationModel{
Name: a.Name,
Help: a.Help,
Version: a.version,
Author: a.author,
FlagGroupModel: a.flagGroup.Model(),
ArgGroupModel: a.argGroup.Model(),
CmdGroupModel: a.cmdGroup.Model(),
}
}
func (a *argGroup) Model() *ArgGroupModel {
m := &ArgGroupModel{}
for _, arg := range a.args {
m.Args = append(m.Args, arg.Model())
}
return m
}
func (a *ArgClause) Model() *ArgModel {
return &ArgModel{
Name: a.name,
Help: a.help,
Default: a.defaultValues,
Envar: a.envar,
PlaceHolder: a.placeholder,
Required: a.required,
Hidden: a.hidden,
Value: a.value,
}
}
func (f *flagGroup) Model() *FlagGroupModel {
m := &FlagGroupModel{}
for _, fl := range f.flagOrder {
m.Flags = append(m.Flags, fl.Model())
}
return m
}
func (f *FlagClause) Model() *FlagModel {
return &FlagModel{
Name: f.name,
Help: f.help,
Short: rune(f.shorthand),
Default: f.defaultValues,
Envar: f.envar,
PlaceHolder: f.placeholder,
Required: f.required,
Hidden: f.hidden,
Value: f.value,
}
}
func (c *cmdGroup) Model() *CmdGroupModel {
m := &CmdGroupModel{}
for _, cm := range c.commandOrder {
m.Commands = append(m.Commands, cm.Model())
}
return m
}
func (c *CmdClause) Model() *CmdModel {
depth := 0
for i := c; i != nil; i = i.parent {
depth++
}
return &CmdModel{
Name: c.name,
Aliases: c.aliases,
Help: c.help,
HelpLong: c.helpLong,
Depth: depth,
Hidden: c.hidden,
Default: c.isDefault,
FullCommand: c.FullCommand(),
FlagGroupModel: c.flagGroup.Model(),
ArgGroupModel: c.argGroup.Model(),
CmdGroupModel: c.cmdGroup.Model(),
}
}
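The model types give a read-only snapshot of the configured application; the usage templates consume them, but they can also be inspected directly, for example to list every flag with its default and envar. A sketch (illustrative; flag names are placeholders):
// Sketch: inspecting the application model.
package main

import (
	"fmt"
	"os"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	app := kingpin.New("demo", "Model demo.")
	app.Flag("listen-address", "Address to listen on.").Default(":8080").String()
	app.Flag("log-level", "Log level.").Default("info").Envar("DEMO_LOG_LEVEL").String()

	for _, f := range app.Model().Flags {
		fmt.Printf("--%s\tdefault=%v\tenvar=%q\t%s\n", f.Name, f.Default, f.Envar, f.Help)
	}
	kingpin.MustParse(app.Parse(os.Args[1:]))
}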

396
vendor/github.com/alecthomas/kingpin/v2/parser.go generated vendored Normal file
View file

@ -0,0 +1,396 @@
package kingpin
import (
"bufio"
"fmt"
"os"
"strings"
"unicode/utf8"
)
type TokenType int
// Token types.
const (
TokenShort TokenType = iota
TokenLong
TokenArg
TokenError
TokenEOL
)
func (t TokenType) String() string {
switch t {
case TokenShort:
return "short flag"
case TokenLong:
return "long flag"
case TokenArg:
return "argument"
case TokenError:
return "error"
case TokenEOL:
return "<EOL>"
}
return "?"
}
var (
TokenEOLMarker = Token{-1, TokenEOL, ""}
)
type Token struct {
Index int
Type TokenType
Value string
}
func (t *Token) Equal(o *Token) bool {
return t.Index == o.Index
}
func (t *Token) IsFlag() bool {
return t.Type == TokenShort || t.Type == TokenLong
}
func (t *Token) IsEOF() bool {
return t.Type == TokenEOL
}
func (t *Token) String() string {
switch t.Type {
case TokenShort:
return "-" + t.Value
case TokenLong:
return "--" + t.Value
case TokenArg:
return t.Value
case TokenError:
return "error: " + t.Value
case TokenEOL:
return "<EOL>"
default:
panic("unhandled type")
}
}
// A union of possible elements in a parse stack.
type ParseElement struct {
// Clause is either *CmdClause, *ArgClause or *FlagClause.
Clause interface{}
// Value is corresponding value for an ArgClause or FlagClause (if any).
Value *string
}
// ParseContext holds the current context of the parser. When passed to
// Action() callbacks Elements will be fully populated with *FlagClause,
// *ArgClause and *CmdClause values and their corresponding arguments (if
// any).
type ParseContext struct {
SelectedCommand *CmdClause
ignoreDefault bool
argsOnly bool
peek []*Token
argi int // Index of current command-line arg we're processing.
args []string
rawArgs []string
flags *flagGroup
arguments *argGroup
argumenti int // Cursor into arguments
// Flags, arguments and commands encountered and collected during parse.
Elements []*ParseElement
}
func (p *ParseContext) nextArg() *ArgClause {
if p.argumenti >= len(p.arguments.args) {
return nil
}
arg := p.arguments.args[p.argumenti]
if !arg.consumesRemainder() {
p.argumenti++
}
return arg
}
func (p *ParseContext) next() {
p.argi++
p.args = p.args[1:]
}
// HasTrailingArgs returns true if there are unparsed command-line arguments.
// This can occur if the parser can not match remaining arguments.
func (p *ParseContext) HasTrailingArgs() bool {
return len(p.args) > 0
}
func tokenize(args []string, ignoreDefault bool) *ParseContext {
return &ParseContext{
ignoreDefault: ignoreDefault,
args: args,
rawArgs: args,
flags: newFlagGroup(),
arguments: newArgGroup(),
}
}
func (p *ParseContext) mergeFlags(flags *flagGroup) {
for _, flag := range flags.flagOrder {
if flag.shorthand != 0 {
p.flags.short[string(flag.shorthand)] = flag
}
p.flags.long[flag.name] = flag
p.flags.flagOrder = append(p.flags.flagOrder, flag)
}
}
func (p *ParseContext) mergeArgs(args *argGroup) {
p.arguments.args = append(p.arguments.args, args.args...)
}
func (p *ParseContext) EOL() bool {
return p.Peek().Type == TokenEOL
}
func (p *ParseContext) Error() bool {
return p.Peek().Type == TokenError
}
// Next token in the parse context.
func (p *ParseContext) Next() *Token {
if len(p.peek) > 0 {
return p.pop()
}
// End of tokens.
if len(p.args) == 0 {
return &Token{Index: p.argi, Type: TokenEOL}
}
if p.argi > 0 && p.argi <= len(p.rawArgs) && p.rawArgs[p.argi-1] == "--" {
// If the previous argument was a --, from now on only arguments are parsed.
p.argsOnly = true
}
arg := p.args[0]
p.next()
if p.argsOnly {
return &Token{p.argi, TokenArg, arg}
}
if arg == "--" {
return p.Next()
}
if strings.HasPrefix(arg, "--") {
parts := strings.SplitN(arg[2:], "=", 2)
token := &Token{p.argi, TokenLong, parts[0]}
if len(parts) == 2 {
p.Push(&Token{p.argi, TokenArg, parts[1]})
}
return token
}
if strings.HasPrefix(arg, "-") {
if len(arg) == 1 {
return &Token{Index: p.argi, Type: TokenArg}
}
shortRune, size := utf8.DecodeRuneInString(arg[1:])
short := string(shortRune)
flag, ok := p.flags.short[short]
// Not a known short flag, we'll just return it anyway.
if !ok {
} else if fb, ok := flag.value.(boolFlag); ok && fb.IsBoolFlag() {
// Bool short flag.
} else {
// Short flag with combined argument: -fARG
token := &Token{p.argi, TokenShort, short}
if len(arg) > size+1 {
p.Push(&Token{p.argi, TokenArg, arg[size+1:]})
}
return token
}
if len(arg) > size+1 {
p.args = append([]string{"-" + arg[size+1:]}, p.args...)
}
return &Token{p.argi, TokenShort, short}
} else if EnableFileExpansion && strings.HasPrefix(arg, "@") {
expanded, err := ExpandArgsFromFile(arg[1:])
if err != nil {
return &Token{p.argi, TokenError, err.Error()}
}
if len(p.args) == 0 {
p.args = expanded
} else {
p.args = append(expanded, p.args...)
}
return p.Next()
}
return &Token{p.argi, TokenArg, arg}
}
func (p *ParseContext) Peek() *Token {
if len(p.peek) == 0 {
return p.Push(p.Next())
}
return p.peek[len(p.peek)-1]
}
func (p *ParseContext) Push(token *Token) *Token {
p.peek = append(p.peek, token)
return token
}
func (p *ParseContext) pop() *Token {
end := len(p.peek) - 1
token := p.peek[end]
p.peek = p.peek[0:end]
return token
}
func (p *ParseContext) String() string {
return p.SelectedCommand.FullCommand()
}
func (p *ParseContext) matchedFlag(flag *FlagClause, value string) {
p.Elements = append(p.Elements, &ParseElement{Clause: flag, Value: &value})
}
func (p *ParseContext) matchedArg(arg *ArgClause, value string) {
p.Elements = append(p.Elements, &ParseElement{Clause: arg, Value: &value})
}
func (p *ParseContext) matchedCmd(cmd *CmdClause) {
p.Elements = append(p.Elements, &ParseElement{Clause: cmd})
p.mergeFlags(cmd.flagGroup)
p.mergeArgs(cmd.argGroup)
p.SelectedCommand = cmd
}
// Expand arguments from a file. Lines starting with # will be treated as comments.
func ExpandArgsFromFile(filename string) (out []string, err error) {
if filename == "" {
return nil, fmt.Errorf("expected @ file to expand arguments from")
}
r, err := os.Open(filename)
if err != nil {
return nil, fmt.Errorf("failed to open arguments file %q: %s", filename, err)
}
defer r.Close()
scanner := bufio.NewScanner(r)
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, "#") || strings.TrimSpace(line) == "" {
continue
}
out = append(out, line)
}
err = scanner.Err()
if err != nil {
return nil, fmt.Errorf("failed to read arguments from %q: %s", filename, err)
}
return
}
func parse(context *ParseContext, app *Application) (err error) {
context.mergeFlags(app.flagGroup)
context.mergeArgs(app.argGroup)
cmds := app.cmdGroup
ignoreDefault := context.ignoreDefault
loop:
for !context.EOL() && !context.Error() {
token := context.Peek()
switch token.Type {
case TokenLong, TokenShort:
if flag, err := context.flags.parse(context); err != nil {
if !ignoreDefault {
if cmd := cmds.defaultSubcommand(); cmd != nil {
cmd.completionAlts = cmds.cmdNames()
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
break
}
}
return err
} else if flag == HelpFlag {
ignoreDefault = true
}
case TokenArg:
if cmds.have() {
selectedDefault := false
cmd, ok := cmds.commands[token.String()]
if !ok {
if !ignoreDefault {
if cmd = cmds.defaultSubcommand(); cmd != nil {
cmd.completionAlts = cmds.cmdNames()
selectedDefault = true
}
}
if cmd == nil {
return fmt.Errorf("expected command but got %q", token)
}
}
if cmd == HelpCommand {
ignoreDefault = true
}
cmd.completionAlts = nil
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
if !selectedDefault {
context.Next()
}
} else if context.arguments.have() {
if app.noInterspersed {
// no more flags
context.argsOnly = true
}
arg := context.nextArg()
if arg == nil {
break loop
}
context.matchedArg(arg, token.String())
context.Next()
} else {
break loop
}
case TokenEOL:
break loop
}
}
// Move to innermost default command.
for !ignoreDefault {
if cmd := cmds.defaultSubcommand(); cmd != nil {
cmd.completionAlts = cmds.cmdNames()
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
} else {
break
}
}
if context.Error() {
return fmt.Errorf("%s", context.Peek().Value)
}
if !context.EOL() {
return fmt.Errorf("unexpected %s", context.Peek())
}
// Set defaults for all remaining args.
for arg := context.nextArg(); arg != nil && !arg.consumesRemainder(); arg = context.nextArg() {
for _, defaultValue := range arg.defaultValues {
if err := arg.value.Set(defaultValue); err != nil {
return fmt.Errorf("invalid default value '%s' for argument '%s'", defaultValue, arg.name)
}
}
}
return
}
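Two parser behaviours are easy to miss: a bare "--" switches the tokenizer into arguments-only mode, and when EnableFileExpansion is true an argument of the form @file is replaced by the lines of that file (blank lines and lines starting with # are skipped). The same expansion is available through the exported helper. A sketch (illustrative; the file name args.txt is a placeholder):
// Sketch: expanding command-line arguments from a file.
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

func main() {
	// args.txt might contain, one option per line:
	//   --listen-address=:9100
	//   # comments and blank lines are ignored
	expanded, err := kingpin.ExpandArgsFromFile("args.txt")
	if err != nil {
		kingpin.Fatalf("%s", err)
	}
	fmt.Println(expanded)
	// During Parse the same expansion happens automatically for any argument
	// written as @args.txt, unless kingpin.EnableFileExpansion is set to false.
}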

216
vendor/github.com/alecthomas/kingpin/v2/parsers.go generated vendored Normal file
View file

@ -0,0 +1,216 @@
package kingpin
import (
"net"
"net/url"
"os"
"time"
"github.com/alecthomas/units"
)
type Settings interface {
SetValue(value Value)
}
type parserMixin struct {
value Value
required bool
}
func (p *parserMixin) SetText(text Text) {
p.value = &wrapText{text}
}
func (p *parserMixin) SetValue(value Value) {
p.value = value
}
// StringMap provides key=value parsing into a map.
func (p *parserMixin) StringMap() (target *map[string]string) {
target = &(map[string]string{})
p.StringMapVar(target)
return
}
// Duration sets the parser to a time.Duration parser.
func (p *parserMixin) Duration() (target *time.Duration) {
target = new(time.Duration)
p.DurationVar(target)
return
}
// Bytes parses numeric byte units, e.g. 1.5KB.
func (p *parserMixin) Bytes() (target *units.Base2Bytes) {
target = new(units.Base2Bytes)
p.BytesVar(target)
return
}
// IP sets the parser to a net.IP parser.
func (p *parserMixin) IP() (target *net.IP) {
target = new(net.IP)
p.IPVar(target)
return
}
// TCP (host:port) address.
func (p *parserMixin) TCP() (target **net.TCPAddr) {
target = new(*net.TCPAddr)
p.TCPVar(target)
return
}
// TCPVar (host:port) address.
func (p *parserMixin) TCPVar(target **net.TCPAddr) {
p.SetValue(newTCPAddrValue(target))
}
// ExistingFile sets the parser to one that requires and returns an existing file.
func (p *parserMixin) ExistingFile() (target *string) {
target = new(string)
p.ExistingFileVar(target)
return
}
// ExistingDir sets the parser to one that requires and returns an existing directory.
func (p *parserMixin) ExistingDir() (target *string) {
target = new(string)
p.ExistingDirVar(target)
return
}
// ExistingFileOrDir sets the parser to one that requires and returns an existing file OR directory.
func (p *parserMixin) ExistingFileOrDir() (target *string) {
target = new(string)
p.ExistingFileOrDirVar(target)
return
}
// File returns an os.File against an existing file.
func (p *parserMixin) File() (target **os.File) {
target = new(*os.File)
p.FileVar(target)
return
}
// OpenFile attempts to open a File with os.OpenFile(flag, perm).
func (p *parserMixin) OpenFile(flag int, perm os.FileMode) (target **os.File) {
target = new(*os.File)
p.OpenFileVar(target, flag, perm)
return
}
// URL provides a valid, parsed url.URL.
func (p *parserMixin) URL() (target **url.URL) {
target = new(*url.URL)
p.URLVar(target)
return
}
// StringMap provides key=value parsing into a map.
func (p *parserMixin) StringMapVar(target *map[string]string) {
p.SetValue(newStringMapValue(target))
}
// Float sets the parser to a float64 parser.
func (p *parserMixin) Float() (target *float64) {
return p.Float64()
}
// Float sets the parser to a float64 parser.
func (p *parserMixin) FloatVar(target *float64) {
p.Float64Var(target)
}
// Duration sets the parser to a time.Duration parser.
func (p *parserMixin) DurationVar(target *time.Duration) {
p.SetValue(newDurationValue(target))
}
// BytesVar parses numeric byte units. eg. 1.5KB
func (p *parserMixin) BytesVar(target *units.Base2Bytes) {
p.SetValue(newBytesValue(target))
}
// IP sets the parser to a net.IP parser.
func (p *parserMixin) IPVar(target *net.IP) {
p.SetValue(newIPValue(target))
}
// ExistingFile sets the parser to one that requires and returns an existing file.
func (p *parserMixin) ExistingFileVar(target *string) {
p.SetValue(newExistingFileValue(target))
}
// ExistingDir sets the parser to one that requires and returns an existing directory.
func (p *parserMixin) ExistingDirVar(target *string) {
p.SetValue(newExistingDirValue(target))
}
// ExistingFileOrDirVar sets the parser to one that requires and returns an existing file OR directory.
func (p *parserMixin) ExistingFileOrDirVar(target *string) {
p.SetValue(newExistingFileOrDirValue(target))
}
// FileVar opens an existing file.
func (p *parserMixin) FileVar(target **os.File) {
p.SetValue(newFileValue(target, os.O_RDONLY, 0))
}
// OpenFileVar calls os.OpenFile(flag, perm)
func (p *parserMixin) OpenFileVar(target **os.File, flag int, perm os.FileMode) {
p.SetValue(newFileValue(target, flag, perm))
}
// URL provides a valid, parsed url.URL.
func (p *parserMixin) URLVar(target **url.URL) {
p.SetValue(newURLValue(target))
}
// URLList provides a parsed list of url.URL values.
func (p *parserMixin) URLList() (target *[]*url.URL) {
target = new([]*url.URL)
p.URLListVar(target)
return
}
// URLListVar provides a parsed list of url.URL values.
func (p *parserMixin) URLListVar(target *[]*url.URL) {
p.SetValue(newURLListValue(target))
}
// Enum allows a value from a set of options.
func (p *parserMixin) Enum(options ...string) (target *string) {
target = new(string)
p.EnumVar(target, options...)
return
}
// EnumVar allows a value from a set of options.
func (p *parserMixin) EnumVar(target *string, options ...string) {
p.SetValue(newEnumFlag(target, options...))
}
// Enums allows a set of values from a set of options.
func (p *parserMixin) Enums(options ...string) (target *[]string) {
target = new([]string)
p.EnumsVar(target, options...)
return
}
// EnumsVar allows a set of values from a set of options.
func (p *parserMixin) EnumsVar(target *[]string, options ...string) {
p.SetValue(newEnumsFlag(target, options...))
}
// A Counter increments a number each time it is encountered.
func (p *parserMixin) Counter() (target *int) {
target = new(int)
p.CounterVar(target)
return
}
func (p *parserMixin) CounterVar(target *int) {
p.SetValue(newCounterValue(target))
}
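
The mixin methods above are the typed terminators kingpin exposes on flags and arguments. A minimal sketch of how a consumer wires a few of them up; the flag names and defaults here are invented for illustration:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

var (
	// Each terminator installs the matching Value implementation via SetValue.
	timeout = kingpin.Flag("timeout", "Request timeout.").Default("5s").Duration()
	listen  = kingpin.Flag("listen", "Listen address.").Default(":8080").TCP()
	level   = kingpin.Flag("level", "Log level.").Default("info").Enum("debug", "info", "warn", "error")
)

func main() {
	kingpin.Parse()
	fmt.Println(*timeout, (*listen).String(), *level)
}
```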

262
vendor/github.com/alecthomas/kingpin/v2/templates.go generated vendored Normal file
View file

@ -0,0 +1,262 @@
package kingpin
// Default usage template.
var DefaultUsageTemplate = `{{define "FormatCommand" -}}
{{if .FlagSummary}} {{.FlagSummary}}{{end -}}
{{range .Args}}{{if not .Hidden}} {{if not .Required}}[{{end}}{{if .PlaceHolder}}{{.PlaceHolder}}{{else}}<{{.Name}}>{{end}}{{if .Value|IsCumulative}}...{{end}}{{if not .Required}}]{{end}}{{end}}{{end -}}
{{end -}}
{{define "FormatCommands" -}}
{{range .FlattenedCommands -}}
{{if not .Hidden -}}
{{.FullCommand}}{{if .Default}}*{{end}}{{template "FormatCommand" .}}
{{.Help|Wrap 4}}
{{end -}}
{{end -}}
{{end -}}
{{define "FormatUsage" -}}
{{template "FormatCommand" .}}{{if .Commands}} <command> [<args> ...]{{end}}
{{if .Help}}
{{.Help|Wrap 0 -}}
{{end -}}
{{end -}}
{{if .Context.SelectedCommand -}}
usage: {{.App.Name}} {{.Context.SelectedCommand}}{{template "FormatUsage" .Context.SelectedCommand}}
{{ else -}}
usage: {{.App.Name}}{{template "FormatUsage" .App}}
{{end}}
{{if .Context.Flags -}}
Flags:
{{.Context.Flags|FlagsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.Args -}}
Args:
{{.Context.Args|ArgsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.SelectedCommand -}}
{{if len .Context.SelectedCommand.Commands -}}
Subcommands:
{{template "FormatCommands" .Context.SelectedCommand}}
{{end -}}
{{else if .App.Commands -}}
Commands:
{{template "FormatCommands" .App}}
{{end -}}
`
// Usage template where command's optional flags are listed separately
var SeparateOptionalFlagsUsageTemplate = `{{define "FormatCommand" -}}
{{if .FlagSummary}} {{.FlagSummary}}{{end -}}
{{range .Args}}{{if not .Hidden}} {{if not .Required}}[{{end}}{{if .PlaceHolder}}{{.PlaceHolder}}{{else}}<{{.Name}}>{{end}}{{if .Value|IsCumulative}}...{{end}}{{if not .Required}}]{{end}}{{end}}{{end -}}
{{end -}}
{{define "FormatCommands" -}}
{{range .FlattenedCommands -}}
{{if not .Hidden -}}
{{.FullCommand}}{{if .Default}}*{{end}}{{template "FormatCommand" .}}
{{.Help|Wrap 4}}
{{end -}}
{{end -}}
{{end -}}
{{define "FormatUsage" -}}
{{template "FormatCommand" .}}{{if .Commands}} <command> [<args> ...]{{end}}
{{if .Help}}
{{.Help|Wrap 0 -}}
{{end -}}
{{end -}}
{{if .Context.SelectedCommand -}}
usage: {{.App.Name}} {{.Context.SelectedCommand}}{{template "FormatUsage" .Context.SelectedCommand}}
{{else -}}
usage: {{.App.Name}}{{template "FormatUsage" .App}}
{{end -}}
{{if .Context.Flags|RequiredFlags -}}
Required flags:
{{.Context.Flags|RequiredFlags|FlagsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.Flags|OptionalFlags -}}
Optional flags:
{{.Context.Flags|OptionalFlags|FlagsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.Args -}}
Args:
{{.Context.Args|ArgsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.SelectedCommand -}}
Subcommands:
{{if .Context.SelectedCommand.Commands -}}
{{template "FormatCommands" .Context.SelectedCommand}}
{{end -}}
{{else if .App.Commands -}}
Commands:
{{template "FormatCommands" .App}}
{{end -}}
`
// Usage template with compactly formatted commands.
var CompactUsageTemplate = `{{define "FormatCommand" -}}
{{if .FlagSummary}} {{.FlagSummary}}{{end -}}
{{range .Args}}{{if not .Hidden}} {{if not .Required}}[{{end}}{{if .PlaceHolder}}{{.PlaceHolder}}{{else}}<{{.Name}}>{{end}}{{if .Value|IsCumulative}}...{{end}}{{if not .Required}}]{{end}}{{end}}{{end -}}
{{end -}}
{{define "FormatCommandList" -}}
{{range . -}}
{{if not .Hidden -}}
{{.Depth|Indent}}{{.Name}}{{if .Default}}*{{end}}{{template "FormatCommand" .}}
{{end -}}
{{template "FormatCommandList" .Commands -}}
{{end -}}
{{end -}}
{{define "FormatUsage" -}}
{{template "FormatCommand" .}}{{if .Commands}} <command> [<args> ...]{{end}}
{{if .Help}}
{{.Help|Wrap 0 -}}
{{end -}}
{{end -}}
{{if .Context.SelectedCommand -}}
usage: {{.App.Name}} {{.Context.SelectedCommand}}{{template "FormatUsage" .Context.SelectedCommand}}
{{else -}}
usage: {{.App.Name}}{{template "FormatUsage" .App}}
{{end -}}
{{if .Context.Flags -}}
Flags:
{{.Context.Flags|FlagsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.Args -}}
Args:
{{.Context.Args|ArgsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.SelectedCommand -}}
{{if .Context.SelectedCommand.Commands -}}
Commands:
{{.Context.SelectedCommand}}
{{template "FormatCommandList" .Context.SelectedCommand.Commands}}
{{end -}}
{{else if .App.Commands -}}
Commands:
{{template "FormatCommandList" .App.Commands}}
{{end -}}
`
var ManPageTemplate = `{{define "FormatFlags" -}}
{{range .Flags -}}
{{if not .Hidden -}}
.TP
\fB{{if .Short}}-{{.Short|Char}}, {{end}}--{{.Name}}{{if not .IsBoolFlag}}={{.FormatPlaceHolder}}{{end -}}\fR
{{.Help}}
{{end -}}
{{end -}}
{{end -}}
{{define "FormatCommand" -}}
{{if .FlagSummary}} {{.FlagSummary}}{{end -}}
{{range .Args}}{{if not .Hidden}} {{if not .Required}}[{{end}}{{if .PlaceHolder}}{{.PlaceHolder}}{{else}}<{{.Name}}>{{end}}{{if .Value|IsCumulative}}...{{end}}{{if not .Required}}]{{end}}{{end}}{{end -}}
{{end -}}
{{define "FormatCommands" -}}
{{range .FlattenedCommands -}}
{{if not .Hidden -}}
.SS
\fB{{.FullCommand}}{{template "FormatCommand" . -}}\fR
.PP
{{.Help}}
{{template "FormatFlags" . -}}
{{end -}}
{{end -}}
{{end -}}
{{define "FormatUsage" -}}
{{template "FormatCommand" .}}{{if .Commands}} <command> [<args> ...]{{end -}}\fR
{{end -}}
.TH {{.App.Name}} 1 {{.App.Version}} "{{.App.Author}}"
.SH "NAME"
{{.App.Name}}
.SH "SYNOPSIS"
.TP
\fB{{.App.Name}}{{template "FormatUsage" .App}}
.SH "DESCRIPTION"
{{.App.Help}}
.SH "OPTIONS"
{{template "FormatFlags" .App -}}
{{if .App.Commands -}}
.SH "COMMANDS"
{{template "FormatCommands" .App -}}
{{end -}}
`
// Usage template with full help for flags and commands.
var LongHelpTemplate = `{{define "FormatCommand" -}}
{{if .FlagSummary}} {{.FlagSummary}}{{end -}}
{{range .Args}}{{if not .Hidden}} {{if not .Required}}[{{end}}{{if .PlaceHolder}}{{.PlaceHolder}}{{else}}<{{.Name}}>{{end}}{{if .Value|IsCumulative}}...{{end}}{{if not .Required}}]{{end}}{{end}}{{end -}}
{{end -}}
{{define "FormatCommands" -}}
{{range .FlattenedCommands -}}
{{if not .Hidden -}}
{{.FullCommand}}{{template "FormatCommand" .}}
{{.Help|Wrap 4}}
{{with .Flags|FlagsToTwoColumns}}{{FormatTwoColumnsWithIndent . 4 2}}{{end}}
{{end -}}
{{end -}}
{{end -}}
{{define "FormatUsage" -}}
{{template "FormatCommand" .}}{{if .Commands}} <command> [<args> ...]{{end}}
{{if .Help}}
{{.Help|Wrap 0 -}}
{{end -}}
{{end -}}
usage: {{.App.Name}}{{template "FormatUsage" .App}}
{{if .Context.Flags -}}
Flags:
{{.Context.Flags|FlagsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .Context.Args -}}
Args:
{{.Context.Args|ArgsToTwoColumns|FormatTwoColumns}}
{{end -}}
{{if .App.Commands -}}
Commands:
{{template "FormatCommands" .App}}
{{end -}}
`
var BashCompletionTemplate = `
_{{.App.Name}}_bash_autocomplete() {
local cur prev opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
opts=$( ${COMP_WORDS[0]} --completion-bash "${COMP_WORDS[@]:1:$COMP_CWORD}" )
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
}
complete -F _{{.App.Name}}_bash_autocomplete -o default {{.App.Name}}
`
var ZshCompletionTemplate = `#compdef {{.App.Name}}
_{{.App.Name}}() {
local matches=($(${words[1]} --completion-bash "${(@)words[1,$CURRENT]}"))
compadd -a matches
if [[ $compstate[nmatches] -eq 0 && $words[$CURRENT] != -* ]]; then
_files
fi
}
if [[ "$(basename -- ${(%):-%x})" != "_{{.App.Name}}" ]]; then
compdef _{{.App.Name}} {{.App.Name}}
fi
`
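
Any of the exported templates above can be installed on an application. A small sketch, assuming a made-up app with one command, that switches the default app to the compact template:

```go
package main

import "github.com/alecthomas/kingpin/v2"

var (
	verbose = kingpin.Flag("verbose", "Verbose output.").Short('v').Bool()
	serve   = kingpin.Command("serve", "Start the server.")
)

func main() {
	// Use CompactUsageTemplate instead of DefaultUsageTemplate for --help output.
	kingpin.UsageTemplate(kingpin.CompactUsageTemplate)
	switch kingpin.Parse() {
	case serve.FullCommand():
		// start serving; *verbose would control log detail
	}
}
```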

225
vendor/github.com/alecthomas/kingpin/v2/usage.go generated vendored Normal file
View file

@ -0,0 +1,225 @@
package kingpin
import (
"bytes"
"fmt"
"go/doc"
"io"
"strings"
"text/template"
)
var (
preIndent = " "
)
func formatTwoColumns(w io.Writer, indent, padding, width int, rows [][2]string) {
// Find size of first column.
s := 0
for _, row := range rows {
if c := len(row[0]); c > s && c < 30 {
s = c
}
}
indentStr := strings.Repeat(" ", indent)
offsetStr := strings.Repeat(" ", s+padding)
for _, row := range rows {
buf := bytes.NewBuffer(nil)
doc.ToText(buf, row[1], "", preIndent, width-s-padding-indent)
lines := strings.Split(strings.TrimRight(buf.String(), "\n"), "\n")
fmt.Fprintf(w, "%s%-*s%*s", indentStr, s, row[0], padding, "")
if len(row[0]) >= 30 {
fmt.Fprintf(w, "\n%s%s", indentStr, offsetStr)
}
fmt.Fprintf(w, "%s\n", lines[0])
for _, line := range lines[1:] {
fmt.Fprintf(w, "%s%s%s\n", indentStr, offsetStr, line)
}
}
}
// Usage writes application usage to w. It parses args to determine
// appropriate help context, such as which command to show help for.
func (a *Application) Usage(args []string) {
context, err := a.parseContext(true, args)
a.FatalIfError(err, "")
if err := a.UsageForContextWithTemplate(context, 2, a.usageTemplate); err != nil {
panic(err)
}
}
func formatAppUsage(app *ApplicationModel) string {
s := []string{app.Name}
if len(app.Flags) > 0 {
s = append(s, app.FlagSummary())
}
if len(app.Args) > 0 {
s = append(s, app.ArgSummary())
}
return strings.Join(s, " ")
}
func formatCmdUsage(app *ApplicationModel, cmd *CmdModel) string {
s := []string{app.Name, cmd.String()}
if len(cmd.Flags) > 0 {
s = append(s, cmd.FlagSummary())
}
if len(cmd.Args) > 0 {
s = append(s, cmd.ArgSummary())
}
return strings.Join(s, " ")
}
func formatFlag(haveShort bool, flag *FlagModel) string {
flagString := ""
flagName := flag.Name
if flag.IsBoolFlag() {
flagName = "[no-]" + flagName
}
if flag.Short != 0 {
flagString += fmt.Sprintf("-%c, --%s", flag.Short, flagName)
} else {
if haveShort {
flagString += fmt.Sprintf(" --%s", flagName)
} else {
flagString += fmt.Sprintf("--%s", flagName)
}
}
if !flag.IsBoolFlag() {
flagString += fmt.Sprintf("=%s", flag.FormatPlaceHolder())
}
if v, ok := flag.Value.(repeatableFlag); ok && v.IsCumulative() {
flagString += " ..."
}
return flagString
}
type templateParseContext struct {
SelectedCommand *CmdModel
*FlagGroupModel
*ArgGroupModel
}
type templateContext struct {
App *ApplicationModel
Width int
Context *templateParseContext
}
// UsageForContext displays usage information from a ParseContext (obtained from
// Application.ParseContext() or Action(f) callbacks).
func (a *Application) UsageForContext(context *ParseContext) error {
return a.UsageForContextWithTemplate(context, 2, a.usageTemplate)
}
// UsageForContextWithTemplate is the base usage function. You generally don't need to use this.
func (a *Application) UsageForContextWithTemplate(context *ParseContext, indent int, tmpl string) error {
width := guessWidth(a.usageWriter)
funcs := template.FuncMap{
"Indent": func(level int) string {
return strings.Repeat(" ", level*indent)
},
"Wrap": func(indent int, s string) string {
buf := bytes.NewBuffer(nil)
indentText := strings.Repeat(" ", indent)
doc.ToText(buf, s, indentText, " "+indentText, width-indent)
return buf.String()
},
"FormatFlag": formatFlag,
"FlagsToTwoColumns": func(f []*FlagModel) [][2]string {
rows := [][2]string{}
haveShort := false
for _, flag := range f {
if flag.Short != 0 {
haveShort = true
break
}
}
for _, flag := range f {
if !flag.Hidden {
rows = append(rows, [2]string{formatFlag(haveShort, flag), flag.HelpWithEnvar()})
}
}
return rows
},
"RequiredFlags": func(f []*FlagModel) []*FlagModel {
requiredFlags := []*FlagModel{}
for _, flag := range f {
if flag.Required {
requiredFlags = append(requiredFlags, flag)
}
}
return requiredFlags
},
"OptionalFlags": func(f []*FlagModel) []*FlagModel {
optionalFlags := []*FlagModel{}
for _, flag := range f {
if !flag.Required {
optionalFlags = append(optionalFlags, flag)
}
}
return optionalFlags
},
"ArgsToTwoColumns": func(a []*ArgModel) [][2]string {
rows := [][2]string{}
for _, arg := range a {
if !arg.Hidden {
var s string
if arg.PlaceHolder != "" {
s = arg.PlaceHolder
} else {
s = "<" + arg.Name + ">"
}
if !arg.Required {
s = "[" + s + "]"
}
rows = append(rows, [2]string{s, arg.HelpWithEnvar()})
}
}
return rows
},
"FormatTwoColumns": func(rows [][2]string) string {
buf := bytes.NewBuffer(nil)
formatTwoColumns(buf, indent, indent, width, rows)
return buf.String()
},
"FormatTwoColumnsWithIndent": func(rows [][2]string, indent, padding int) string {
buf := bytes.NewBuffer(nil)
formatTwoColumns(buf, indent, padding, width, rows)
return buf.String()
},
"FormatAppUsage": formatAppUsage,
"FormatCommandUsage": formatCmdUsage,
"IsCumulative": func(value Value) bool {
r, ok := value.(remainderArg)
return ok && r.IsCumulative()
},
"Char": func(c rune) string {
return string(c)
},
}
for k, v := range a.usageFuncs {
funcs[k] = v
}
t, err := template.New("usage").Funcs(funcs).Parse(tmpl)
if err != nil {
return err
}
var selectedCommand *CmdModel
if context.SelectedCommand != nil {
selectedCommand = context.SelectedCommand.Model()
}
ctx := templateContext{
App: a.Model(),
Width: width,
Context: &templateParseContext{
SelectedCommand: selectedCommand,
FlagGroupModel: context.flags.Model(),
ArgGroupModel: context.arguments.Model(),
},
}
return t.Execute(a.usageWriter, ctx)
}

489
vendor/github.com/alecthomas/kingpin/v2/values.go generated vendored Normal file
View file

@ -0,0 +1,489 @@
package kingpin
//go:generate go run ./cmd/genvalues/main.go
import (
"encoding"
"fmt"
"net"
"net/url"
"os"
"reflect"
"regexp"
"strings"
"time"
"github.com/alecthomas/units"
"github.com/xhit/go-str2duration/v2"
)
// NOTE: Most of the base type values were lifted from:
// http://golang.org/src/pkg/flag/flag.go?s=20146:20222
// Value is the interface to the dynamic value stored in a flag.
// (The default value is represented as a string.)
//
// If a Value has an IsBoolFlag() bool method returning true, the command-line
// parser makes --name equivalent to -name=true rather than using the next
// command-line argument, and adds a --no-name counterpart for negating the
// flag.
type Value interface {
String() string
Set(string) error
}
// Getter is an interface that allows the contents of a Value to be retrieved.
// It wraps the Value interface, rather than being part of it, because it
// appeared after Go 1 and its compatibility rules. All Value types provided
// by this package satisfy the Getter interface.
type Getter interface {
Value
Get() interface{}
}
// Optional interface to indicate boolean flags that don't accept a value, and
// implicitly have a --no-<x> negation counterpart.
type boolFlag interface {
IsBoolFlag() bool
}
// Optional interface for arguments that cumulatively consume all remaining
// input.
type remainderArg interface {
IsCumulative() bool
}
// Optional interface for flags that can be repeated.
type repeatableFlag interface {
IsCumulative() bool
}
// Text is the interface to the dynamic value stored in a flag.
// (The default value is represented as a string.)
type Text interface {
encoding.TextMarshaler
encoding.TextUnmarshaler
}
type wrapText struct {
text Text
}
func (w wrapText) String() string {
buf, _ := w.text.MarshalText()
return string(buf)
}
func (w *wrapText) Set(s string) error {
return w.text.UnmarshalText([]byte(s))
}
type accumulator struct {
element func(value interface{}) Value
typ reflect.Type
slice reflect.Value
}
// Use reflection to accumulate values into a slice.
//
// target := []string{}
// newAccumulator(&target, func (value interface{}) Value {
// return newStringValue(value.(*string))
// })
func newAccumulator(slice interface{}, element func(value interface{}) Value) *accumulator {
typ := reflect.TypeOf(slice)
if typ.Kind() != reflect.Ptr || typ.Elem().Kind() != reflect.Slice {
panic("expected a pointer to a slice")
}
return &accumulator{
element: element,
typ: typ.Elem().Elem(),
slice: reflect.ValueOf(slice),
}
}
func (a *accumulator) String() string {
out := []string{}
s := a.slice.Elem()
for i := 0; i < s.Len(); i++ {
out = append(out, a.element(s.Index(i).Addr().Interface()).String())
}
return strings.Join(out, ",")
}
func (a *accumulator) Set(value string) error {
e := reflect.New(a.typ)
if err := a.element(e.Interface()).Set(value); err != nil {
return err
}
slice := reflect.Append(a.slice.Elem(), e.Elem())
a.slice.Elem().Set(slice)
return nil
}
func (a *accumulator) Get() interface{} {
return a.slice.Interface()
}
func (a *accumulator) IsCumulative() bool {
return true
}
func (b *boolValue) IsBoolFlag() bool { return true }
// -- time.Duration Value
type durationValue time.Duration
func newDurationValue(p *time.Duration) *durationValue {
return (*durationValue)(p)
}
func (d *durationValue) Set(s string) error {
v, err := str2duration.ParseDuration(s)
*d = durationValue(v)
return err
}
func (d *durationValue) Get() interface{} { return time.Duration(*d) }
func (d *durationValue) String() string { return (*time.Duration)(d).String() }
// -- map[string]string Value
type stringMapValue map[string]string
func newStringMapValue(p *map[string]string) *stringMapValue {
return (*stringMapValue)(p)
}
var stringMapRegex = regexp.MustCompile("[:=]")
func (s *stringMapValue) Set(value string) error {
parts := stringMapRegex.Split(value, 2)
if len(parts) != 2 {
return fmt.Errorf("expected KEY=VALUE got '%s'", value)
}
(*s)[parts[0]] = parts[1]
return nil
}
func (s *stringMapValue) Get() interface{} {
return (map[string]string)(*s)
}
func (s *stringMapValue) String() string {
return fmt.Sprintf("%s", map[string]string(*s))
}
func (s *stringMapValue) IsCumulative() bool {
return true
}
// -- net.IP Value
type ipValue net.IP
func newIPValue(p *net.IP) *ipValue {
return (*ipValue)(p)
}
func (i *ipValue) Set(value string) error {
if ip := net.ParseIP(value); ip == nil {
return fmt.Errorf("'%s' is not an IP address", value)
} else {
*i = *(*ipValue)(&ip)
return nil
}
}
func (i *ipValue) Get() interface{} {
return (net.IP)(*i)
}
func (i *ipValue) String() string {
return (*net.IP)(i).String()
}
// -- *net.TCPAddr Value
type tcpAddrValue struct {
addr **net.TCPAddr
}
func newTCPAddrValue(p **net.TCPAddr) *tcpAddrValue {
return &tcpAddrValue{p}
}
func (i *tcpAddrValue) Set(value string) error {
if addr, err := net.ResolveTCPAddr("tcp", value); err != nil {
return fmt.Errorf("'%s' is not a valid TCP address: %s", value, err)
} else {
*i.addr = addr
return nil
}
}
func (t *tcpAddrValue) Get() interface{} {
return (*net.TCPAddr)(*t.addr)
}
func (i *tcpAddrValue) String() string {
return (*i.addr).String()
}
// -- existingFile Value
type fileStatValue struct {
path *string
predicate func(os.FileInfo) error
}
func newFileStatValue(p *string, predicate func(os.FileInfo) error) *fileStatValue {
return &fileStatValue{
path: p,
predicate: predicate,
}
}
func (e *fileStatValue) Set(value string) error {
if s, err := os.Stat(value); os.IsNotExist(err) {
return fmt.Errorf("path '%s' does not exist", value)
} else if err != nil {
return err
} else if err := e.predicate(s); err != nil {
return err
}
*e.path = value
return nil
}
func (f *fileStatValue) Get() interface{} {
return (string)(*f.path)
}
func (e *fileStatValue) String() string {
return *e.path
}
// -- os.File value
type fileValue struct {
f **os.File
flag int
perm os.FileMode
}
func newFileValue(p **os.File, flag int, perm os.FileMode) *fileValue {
return &fileValue{p, flag, perm}
}
func (f *fileValue) Set(value string) error {
if fd, err := os.OpenFile(value, f.flag, f.perm); err != nil {
return err
} else {
*f.f = fd
return nil
}
}
func (f *fileValue) Get() interface{} {
return (*os.File)(*f.f)
}
func (f *fileValue) String() string {
if *f.f == nil {
return "<nil>"
}
return (*f.f).Name()
}
// -- url.URL Value
type urlValue struct {
u **url.URL
}
func newURLValue(p **url.URL) *urlValue {
return &urlValue{p}
}
func (u *urlValue) Set(value string) error {
if url, err := url.Parse(value); err != nil {
return fmt.Errorf("invalid URL: %s", err)
} else {
*u.u = url
return nil
}
}
func (u *urlValue) Get() interface{} {
return (*url.URL)(*u.u)
}
func (u *urlValue) String() string {
if *u.u == nil {
return "<nil>"
}
return (*u.u).String()
}
// -- []*url.URL Value
type urlListValue []*url.URL
func newURLListValue(p *[]*url.URL) *urlListValue {
return (*urlListValue)(p)
}
func (u *urlListValue) Set(value string) error {
if url, err := url.Parse(value); err != nil {
return fmt.Errorf("invalid URL: %s", err)
} else {
*u = append(*u, url)
return nil
}
}
func (u *urlListValue) Get() interface{} {
return ([]*url.URL)(*u)
}
func (u *urlListValue) String() string {
out := []string{}
for _, url := range *u {
out = append(out, url.String())
}
return strings.Join(out, ",")
}
func (u *urlListValue) IsCumulative() bool {
return true
}
// A flag whose value must be in a set of options.
type enumValue struct {
value *string
options []string
}
func newEnumFlag(target *string, options ...string) *enumValue {
return &enumValue{
value: target,
options: options,
}
}
func (a *enumValue) String() string {
return *a.value
}
func (a *enumValue) Set(value string) error {
for _, v := range a.options {
if v == value {
*a.value = value
return nil
}
}
return fmt.Errorf("enum value must be one of %s, got '%s'", strings.Join(a.options, ","), value)
}
func (e *enumValue) Get() interface{} {
return (string)(*e.value)
}
// -- []string Enum Value
type enumsValue struct {
value *[]string
options []string
}
func newEnumsFlag(target *[]string, options ...string) *enumsValue {
return &enumsValue{
value: target,
options: options,
}
}
func (s *enumsValue) Set(value string) error {
for _, v := range s.options {
if v == value {
*s.value = append(*s.value, value)
return nil
}
}
return fmt.Errorf("enum value must be one of %s, got '%s'", strings.Join(s.options, ","), value)
}
func (e *enumsValue) Get() interface{} {
return ([]string)(*e.value)
}
func (s *enumsValue) String() string {
return strings.Join(*s.value, ",")
}
func (s *enumsValue) IsCumulative() bool {
return true
}
// -- units.Base2Bytes Value
type bytesValue units.Base2Bytes
func newBytesValue(p *units.Base2Bytes) *bytesValue {
return (*bytesValue)(p)
}
func (d *bytesValue) Set(s string) error {
v, err := units.ParseBase2Bytes(s)
*d = bytesValue(v)
return err
}
func (d *bytesValue) Get() interface{} { return units.Base2Bytes(*d) }
func (d *bytesValue) String() string { return (*units.Base2Bytes)(d).String() }
func newExistingFileValue(target *string) *fileStatValue {
return newFileStatValue(target, func(s os.FileInfo) error {
if s.IsDir() {
return fmt.Errorf("'%s' is a directory", s.Name())
}
return nil
})
}
func newExistingDirValue(target *string) *fileStatValue {
return newFileStatValue(target, func(s os.FileInfo) error {
if !s.IsDir() {
return fmt.Errorf("'%s' is a file", s.Name())
}
return nil
})
}
func newExistingFileOrDirValue(target *string) *fileStatValue {
return newFileStatValue(target, func(s os.FileInfo) error { return nil })
}
type counterValue int
func newCounterValue(n *int) *counterValue {
return (*counterValue)(n)
}
func (c *counterValue) Set(s string) error {
*c++
return nil
}
func (c *counterValue) Get() interface{} { return (int)(*c) }
func (c *counterValue) IsBoolFlag() bool { return true }
func (c *counterValue) String() string { return fmt.Sprintf("%d", *c) }
func (c *counterValue) IsCumulative() bool { return true }
func resolveHost(value string) (net.IP, error) {
if ip := net.ParseIP(value); ip != nil {
return ip, nil
} else {
if addr, err := net.ResolveIPAddr("ip", value); err != nil {
return nil, err
} else {
return addr.IP, nil
}
}
}
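
Because Value only requires Set and String, callers can plug custom parsing into a flag via SetValue; implementing IsCumulative additionally marks the flag repeatable. A rough sketch with a hypothetical csvValue type, not part of kingpin:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/alecthomas/kingpin/v2"
)

// csvValue accumulates comma separated items; Set may be called repeatedly.
type csvValue struct{ items *[]string }

func (c *csvValue) Set(s string) error {
	*c.items = append(*c.items, strings.Split(s, ",")...)
	return nil
}

func (c *csvValue) String() string { return strings.Join(*c.items, ",") }

// IsCumulative marks the flag as repeatable (see repeatableFlag above).
func (c *csvValue) IsCumulative() bool { return true }

func main() {
	items := []string{}
	kingpin.Flag("items", "Comma separated items, repeatable.").SetValue(&csvValue{&items})
	kingpin.Parse()
	fmt.Println(items)
}
```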

25
vendor/github.com/alecthomas/kingpin/v2/values.json generated vendored Normal file
View file

@ -0,0 +1,25 @@
[
{"type": "bool", "parser": "strconv.ParseBool(s)"},
{"type": "string", "parser": "s, error(nil)", "format": "string(*f.v)", "plural": "Strings"},
{"type": "uint", "parser": "strconv.ParseUint(s, 0, 64)", "plural": "Uints"},
{"type": "uint8", "parser": "strconv.ParseUint(s, 0, 8)"},
{"type": "uint16", "parser": "strconv.ParseUint(s, 0, 16)"},
{"type": "uint32", "parser": "strconv.ParseUint(s, 0, 32)"},
{"type": "uint64", "parser": "strconv.ParseUint(s, 0, 64)"},
{"type": "int", "parser": "strconv.ParseFloat(s, 64)", "plural": "Ints"},
{"type": "int8", "parser": "strconv.ParseInt(s, 0, 8)"},
{"type": "int16", "parser": "strconv.ParseInt(s, 0, 16)"},
{"type": "int32", "parser": "strconv.ParseInt(s, 0, 32)"},
{"type": "int64", "parser": "strconv.ParseInt(s, 0, 64)"},
{"type": "float64", "parser": "strconv.ParseFloat(s, 64)"},
{"type": "float32", "parser": "strconv.ParseFloat(s, 32)"},
{"name": "Duration", "type": "time.Duration", "no_value_parser": true},
{"name": "IP", "type": "net.IP", "no_value_parser": true},
{"name": "TCPAddr", "Type": "*net.TCPAddr", "plural": "TCPList", "no_value_parser": true},
{"name": "ExistingFile", "Type": "string", "plural": "ExistingFiles", "no_value_parser": true},
{"name": "ExistingDir", "Type": "string", "plural": "ExistingDirs", "no_value_parser": true},
{"name": "ExistingFileOrDir", "Type": "string", "plural": "ExistingFilesOrDirs", "no_value_parser": true},
{"name": "Regexp", "Type": "*regexp.Regexp", "parser": "regexp.Compile(s)"},
{"name": "ResolvedIP", "Type": "net.IP", "parser": "resolveHost(s)", "help": "Resolve a hostname or IP to an IP."},
{"name": "HexBytes", "Type": "[]byte", "parser": "hex.DecodeString(s)", "help": "Bytes as a hex string."}
]

View file

@ -0,0 +1,821 @@
package kingpin
import (
"encoding/hex"
"fmt"
"net"
"regexp"
"strconv"
"time"
)
// This file is autogenerated by "go generate .". Do not modify.
// -- bool Value
type boolValue struct{ v *bool }
func newBoolValue(p *bool) *boolValue {
return &boolValue{p}
}
func (f *boolValue) Set(s string) error {
v, err := strconv.ParseBool(s)
if err == nil {
*f.v = (bool)(v)
}
return err
}
func (f *boolValue) Get() interface{} { return (bool)(*f.v) }
func (f *boolValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Bool parses the next command-line value as bool.
func (p *parserMixin) Bool() (target *bool) {
target = new(bool)
p.BoolVar(target)
return
}
func (p *parserMixin) BoolVar(target *bool) {
p.SetValue(newBoolValue(target))
}
// BoolList accumulates bool values into a slice.
func (p *parserMixin) BoolList() (target *[]bool) {
target = new([]bool)
p.BoolListVar(target)
return
}
func (p *parserMixin) BoolListVar(target *[]bool) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newBoolValue(v.(*bool))
}))
}
// -- string Value
type stringValue struct{ v *string }
func newStringValue(p *string) *stringValue {
return &stringValue{p}
}
func (f *stringValue) Set(s string) error {
v, err := s, error(nil)
if err == nil {
*f.v = (string)(v)
}
return err
}
func (f *stringValue) Get() interface{} { return (string)(*f.v) }
func (f *stringValue) String() string { return string(*f.v) }
// String parses the next command-line value as string.
func (p *parserMixin) String() (target *string) {
target = new(string)
p.StringVar(target)
return
}
func (p *parserMixin) StringVar(target *string) {
p.SetValue(newStringValue(target))
}
// Strings accumulates string values into a slice.
func (p *parserMixin) Strings() (target *[]string) {
target = new([]string)
p.StringsVar(target)
return
}
func (p *parserMixin) StringsVar(target *[]string) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newStringValue(v.(*string))
}))
}
// -- uint Value
type uintValue struct{ v *uint }
func newUintValue(p *uint) *uintValue {
return &uintValue{p}
}
func (f *uintValue) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 64)
if err == nil {
*f.v = (uint)(v)
}
return err
}
func (f *uintValue) Get() interface{} { return (uint)(*f.v) }
func (f *uintValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Uint parses the next command-line value as uint.
func (p *parserMixin) Uint() (target *uint) {
target = new(uint)
p.UintVar(target)
return
}
func (p *parserMixin) UintVar(target *uint) {
p.SetValue(newUintValue(target))
}
// Uints accumulates uint values into a slice.
func (p *parserMixin) Uints() (target *[]uint) {
target = new([]uint)
p.UintsVar(target)
return
}
func (p *parserMixin) UintsVar(target *[]uint) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newUintValue(v.(*uint))
}))
}
// -- uint8 Value
type uint8Value struct{ v *uint8 }
func newUint8Value(p *uint8) *uint8Value {
return &uint8Value{p}
}
func (f *uint8Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 8)
if err == nil {
*f.v = (uint8)(v)
}
return err
}
func (f *uint8Value) Get() interface{} { return (uint8)(*f.v) }
func (f *uint8Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Uint8 parses the next command-line value as uint8.
func (p *parserMixin) Uint8() (target *uint8) {
target = new(uint8)
p.Uint8Var(target)
return
}
func (p *parserMixin) Uint8Var(target *uint8) {
p.SetValue(newUint8Value(target))
}
// Uint8List accumulates uint8 values into a slice.
func (p *parserMixin) Uint8List() (target *[]uint8) {
target = new([]uint8)
p.Uint8ListVar(target)
return
}
func (p *parserMixin) Uint8ListVar(target *[]uint8) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newUint8Value(v.(*uint8))
}))
}
// -- uint16 Value
type uint16Value struct{ v *uint16 }
func newUint16Value(p *uint16) *uint16Value {
return &uint16Value{p}
}
func (f *uint16Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 16)
if err == nil {
*f.v = (uint16)(v)
}
return err
}
func (f *uint16Value) Get() interface{} { return (uint16)(*f.v) }
func (f *uint16Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Uint16 parses the next command-line value as uint16.
func (p *parserMixin) Uint16() (target *uint16) {
target = new(uint16)
p.Uint16Var(target)
return
}
func (p *parserMixin) Uint16Var(target *uint16) {
p.SetValue(newUint16Value(target))
}
// Uint16List accumulates uint16 values into a slice.
func (p *parserMixin) Uint16List() (target *[]uint16) {
target = new([]uint16)
p.Uint16ListVar(target)
return
}
func (p *parserMixin) Uint16ListVar(target *[]uint16) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newUint16Value(v.(*uint16))
}))
}
// -- uint32 Value
type uint32Value struct{ v *uint32 }
func newUint32Value(p *uint32) *uint32Value {
return &uint32Value{p}
}
func (f *uint32Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 32)
if err == nil {
*f.v = (uint32)(v)
}
return err
}
func (f *uint32Value) Get() interface{} { return (uint32)(*f.v) }
func (f *uint32Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Uint32 parses the next command-line value as uint32.
func (p *parserMixin) Uint32() (target *uint32) {
target = new(uint32)
p.Uint32Var(target)
return
}
func (p *parserMixin) Uint32Var(target *uint32) {
p.SetValue(newUint32Value(target))
}
// Uint32List accumulates uint32 values into a slice.
func (p *parserMixin) Uint32List() (target *[]uint32) {
target = new([]uint32)
p.Uint32ListVar(target)
return
}
func (p *parserMixin) Uint32ListVar(target *[]uint32) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newUint32Value(v.(*uint32))
}))
}
// -- uint64 Value
type uint64Value struct{ v *uint64 }
func newUint64Value(p *uint64) *uint64Value {
return &uint64Value{p}
}
func (f *uint64Value) Set(s string) error {
v, err := strconv.ParseUint(s, 0, 64)
if err == nil {
*f.v = (uint64)(v)
}
return err
}
func (f *uint64Value) Get() interface{} { return (uint64)(*f.v) }
func (f *uint64Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Uint64 parses the next command-line value as uint64.
func (p *parserMixin) Uint64() (target *uint64) {
target = new(uint64)
p.Uint64Var(target)
return
}
func (p *parserMixin) Uint64Var(target *uint64) {
p.SetValue(newUint64Value(target))
}
// Uint64List accumulates uint64 values into a slice.
func (p *parserMixin) Uint64List() (target *[]uint64) {
target = new([]uint64)
p.Uint64ListVar(target)
return
}
func (p *parserMixin) Uint64ListVar(target *[]uint64) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newUint64Value(v.(*uint64))
}))
}
// -- int Value
type intValue struct{ v *int }
func newIntValue(p *int) *intValue {
return &intValue{p}
}
func (f *intValue) Set(s string) error {
v, err := strconv.ParseFloat(s, 64)
if err == nil {
*f.v = (int)(v)
}
return err
}
func (f *intValue) Get() interface{} { return (int)(*f.v) }
func (f *intValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Int parses the next command-line value as int.
func (p *parserMixin) Int() (target *int) {
target = new(int)
p.IntVar(target)
return
}
func (p *parserMixin) IntVar(target *int) {
p.SetValue(newIntValue(target))
}
// Ints accumulates int values into a slice.
func (p *parserMixin) Ints() (target *[]int) {
target = new([]int)
p.IntsVar(target)
return
}
func (p *parserMixin) IntsVar(target *[]int) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newIntValue(v.(*int))
}))
}
// -- int8 Value
type int8Value struct{ v *int8 }
func newInt8Value(p *int8) *int8Value {
return &int8Value{p}
}
func (f *int8Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 8)
if err == nil {
*f.v = (int8)(v)
}
return err
}
func (f *int8Value) Get() interface{} { return (int8)(*f.v) }
func (f *int8Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Int8 parses the next command-line value as int8.
func (p *parserMixin) Int8() (target *int8) {
target = new(int8)
p.Int8Var(target)
return
}
func (p *parserMixin) Int8Var(target *int8) {
p.SetValue(newInt8Value(target))
}
// Int8List accumulates int8 values into a slice.
func (p *parserMixin) Int8List() (target *[]int8) {
target = new([]int8)
p.Int8ListVar(target)
return
}
func (p *parserMixin) Int8ListVar(target *[]int8) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newInt8Value(v.(*int8))
}))
}
// -- int16 Value
type int16Value struct{ v *int16 }
func newInt16Value(p *int16) *int16Value {
return &int16Value{p}
}
func (f *int16Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 16)
if err == nil {
*f.v = (int16)(v)
}
return err
}
func (f *int16Value) Get() interface{} { return (int16)(*f.v) }
func (f *int16Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Int16 parses the next command-line value as int16.
func (p *parserMixin) Int16() (target *int16) {
target = new(int16)
p.Int16Var(target)
return
}
func (p *parserMixin) Int16Var(target *int16) {
p.SetValue(newInt16Value(target))
}
// Int16List accumulates int16 values into a slice.
func (p *parserMixin) Int16List() (target *[]int16) {
target = new([]int16)
p.Int16ListVar(target)
return
}
func (p *parserMixin) Int16ListVar(target *[]int16) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newInt16Value(v.(*int16))
}))
}
// -- int32 Value
type int32Value struct{ v *int32 }
func newInt32Value(p *int32) *int32Value {
return &int32Value{p}
}
func (f *int32Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 32)
if err == nil {
*f.v = (int32)(v)
}
return err
}
func (f *int32Value) Get() interface{} { return (int32)(*f.v) }
func (f *int32Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Int32 parses the next command-line value as int32.
func (p *parserMixin) Int32() (target *int32) {
target = new(int32)
p.Int32Var(target)
return
}
func (p *parserMixin) Int32Var(target *int32) {
p.SetValue(newInt32Value(target))
}
// Int32List accumulates int32 values into a slice.
func (p *parserMixin) Int32List() (target *[]int32) {
target = new([]int32)
p.Int32ListVar(target)
return
}
func (p *parserMixin) Int32ListVar(target *[]int32) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newInt32Value(v.(*int32))
}))
}
// -- int64 Value
type int64Value struct{ v *int64 }
func newInt64Value(p *int64) *int64Value {
return &int64Value{p}
}
func (f *int64Value) Set(s string) error {
v, err := strconv.ParseInt(s, 0, 64)
if err == nil {
*f.v = (int64)(v)
}
return err
}
func (f *int64Value) Get() interface{} { return (int64)(*f.v) }
func (f *int64Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Int64 parses the next command-line value as int64.
func (p *parserMixin) Int64() (target *int64) {
target = new(int64)
p.Int64Var(target)
return
}
func (p *parserMixin) Int64Var(target *int64) {
p.SetValue(newInt64Value(target))
}
// Int64List accumulates int64 values into a slice.
func (p *parserMixin) Int64List() (target *[]int64) {
target = new([]int64)
p.Int64ListVar(target)
return
}
func (p *parserMixin) Int64ListVar(target *[]int64) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newInt64Value(v.(*int64))
}))
}
// -- float64 Value
type float64Value struct{ v *float64 }
func newFloat64Value(p *float64) *float64Value {
return &float64Value{p}
}
func (f *float64Value) Set(s string) error {
v, err := strconv.ParseFloat(s, 64)
if err == nil {
*f.v = (float64)(v)
}
return err
}
func (f *float64Value) Get() interface{} { return (float64)(*f.v) }
func (f *float64Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Float64 parses the next command-line value as float64.
func (p *parserMixin) Float64() (target *float64) {
target = new(float64)
p.Float64Var(target)
return
}
func (p *parserMixin) Float64Var(target *float64) {
p.SetValue(newFloat64Value(target))
}
// Float64List accumulates float64 values into a slice.
func (p *parserMixin) Float64List() (target *[]float64) {
target = new([]float64)
p.Float64ListVar(target)
return
}
func (p *parserMixin) Float64ListVar(target *[]float64) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newFloat64Value(v.(*float64))
}))
}
// -- float32 Value
type float32Value struct{ v *float32 }
func newFloat32Value(p *float32) *float32Value {
return &float32Value{p}
}
func (f *float32Value) Set(s string) error {
v, err := strconv.ParseFloat(s, 32)
if err == nil {
*f.v = (float32)(v)
}
return err
}
func (f *float32Value) Get() interface{} { return (float32)(*f.v) }
func (f *float32Value) String() string { return fmt.Sprintf("%v", *f.v) }
// Float32 parses the next command-line value as float32.
func (p *parserMixin) Float32() (target *float32) {
target = new(float32)
p.Float32Var(target)
return
}
func (p *parserMixin) Float32Var(target *float32) {
p.SetValue(newFloat32Value(target))
}
// Float32List accumulates float32 values into a slice.
func (p *parserMixin) Float32List() (target *[]float32) {
target = new([]float32)
p.Float32ListVar(target)
return
}
func (p *parserMixin) Float32ListVar(target *[]float32) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newFloat32Value(v.(*float32))
}))
}
// DurationList accumulates time.Duration values into a slice.
func (p *parserMixin) DurationList() (target *[]time.Duration) {
target = new([]time.Duration)
p.DurationListVar(target)
return
}
func (p *parserMixin) DurationListVar(target *[]time.Duration) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newDurationValue(v.(*time.Duration))
}))
}
// IPList accumulates net.IP values into a slice.
func (p *parserMixin) IPList() (target *[]net.IP) {
target = new([]net.IP)
p.IPListVar(target)
return
}
func (p *parserMixin) IPListVar(target *[]net.IP) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newIPValue(v.(*net.IP))
}))
}
// TCPList accumulates *net.TCPAddr values into a slice.
func (p *parserMixin) TCPList() (target *[]*net.TCPAddr) {
target = new([]*net.TCPAddr)
p.TCPListVar(target)
return
}
func (p *parserMixin) TCPListVar(target *[]*net.TCPAddr) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newTCPAddrValue(v.(**net.TCPAddr))
}))
}
// ExistingFiles accumulates string values into a slice.
func (p *parserMixin) ExistingFiles() (target *[]string) {
target = new([]string)
p.ExistingFilesVar(target)
return
}
func (p *parserMixin) ExistingFilesVar(target *[]string) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newExistingFileValue(v.(*string))
}))
}
// ExistingDirs accumulates string values into a slice.
func (p *parserMixin) ExistingDirs() (target *[]string) {
target = new([]string)
p.ExistingDirsVar(target)
return
}
func (p *parserMixin) ExistingDirsVar(target *[]string) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newExistingDirValue(v.(*string))
}))
}
// ExistingFilesOrDirs accumulates string values into a slice.
func (p *parserMixin) ExistingFilesOrDirs() (target *[]string) {
target = new([]string)
p.ExistingFilesOrDirsVar(target)
return
}
func (p *parserMixin) ExistingFilesOrDirsVar(target *[]string) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newExistingFileOrDirValue(v.(*string))
}))
}
// -- *regexp.Regexp Value
type regexpValue struct{ v **regexp.Regexp }
func newRegexpValue(p **regexp.Regexp) *regexpValue {
return &regexpValue{p}
}
func (f *regexpValue) Set(s string) error {
v, err := regexp.Compile(s)
if err == nil {
*f.v = (*regexp.Regexp)(v)
}
return err
}
func (f *regexpValue) Get() interface{} { return (*regexp.Regexp)(*f.v) }
func (f *regexpValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Regexp parses the next command-line value as *regexp.Regexp.
func (p *parserMixin) Regexp() (target **regexp.Regexp) {
target = new(*regexp.Regexp)
p.RegexpVar(target)
return
}
func (p *parserMixin) RegexpVar(target **regexp.Regexp) {
p.SetValue(newRegexpValue(target))
}
// RegexpList accumulates *regexp.Regexp values into a slice.
func (p *parserMixin) RegexpList() (target *[]*regexp.Regexp) {
target = new([]*regexp.Regexp)
p.RegexpListVar(target)
return
}
func (p *parserMixin) RegexpListVar(target *[]*regexp.Regexp) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newRegexpValue(v.(**regexp.Regexp))
}))
}
// -- net.IP Value
type resolvedIPValue struct{ v *net.IP }
func newResolvedIPValue(p *net.IP) *resolvedIPValue {
return &resolvedIPValue{p}
}
func (f *resolvedIPValue) Set(s string) error {
v, err := resolveHost(s)
if err == nil {
*f.v = (net.IP)(v)
}
return err
}
func (f *resolvedIPValue) Get() interface{} { return (net.IP)(*f.v) }
func (f *resolvedIPValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Resolve a hostname or IP to an IP.
func (p *parserMixin) ResolvedIP() (target *net.IP) {
target = new(net.IP)
p.ResolvedIPVar(target)
return
}
func (p *parserMixin) ResolvedIPVar(target *net.IP) {
p.SetValue(newResolvedIPValue(target))
}
// ResolvedIPList accumulates net.IP values into a slice.
func (p *parserMixin) ResolvedIPList() (target *[]net.IP) {
target = new([]net.IP)
p.ResolvedIPListVar(target)
return
}
func (p *parserMixin) ResolvedIPListVar(target *[]net.IP) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newResolvedIPValue(v.(*net.IP))
}))
}
// -- []byte Value
type hexBytesValue struct{ v *[]byte }
func newHexBytesValue(p *[]byte) *hexBytesValue {
return &hexBytesValue{p}
}
func (f *hexBytesValue) Set(s string) error {
v, err := hex.DecodeString(s)
if err == nil {
*f.v = ([]byte)(v)
}
return err
}
func (f *hexBytesValue) Get() interface{} { return ([]byte)(*f.v) }
func (f *hexBytesValue) String() string { return fmt.Sprintf("%v", *f.v) }
// Bytes as a hex string.
func (p *parserMixin) HexBytes() (target *[]byte) {
target = new([]byte)
p.HexBytesVar(target)
return
}
func (p *parserMixin) HexBytesVar(target *[]byte) {
p.SetValue(newHexBytesValue(target))
}
// HexBytesList accumulates []byte values into a slice.
func (p *parserMixin) HexBytesList() (target *[][]byte) {
target = new([][]byte)
p.HexBytesListVar(target)
return
}
func (p *parserMixin) HexBytesListVar(target *[][]byte) {
p.SetValue(newAccumulator(target, func(v interface{}) Value {
return newHexBytesValue(v.(*[]byte))
}))
}
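
Each generated scalar helper has a matching *List accumulator, which is what makes repeated flags collect into slices. An illustrative sketch with invented flag names:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin/v2"
)

var (
	// Strings() uses the generated accumulator, so --header can repeat.
	headers = kingpin.Flag("header", "Extra header, repeatable.").Strings()
	// Regexp() compiles the flag value with regexp.Compile.
	match = kingpin.Flag("match", "Filter expression.").Default(".*").Regexp()
)

func main() {
	kingpin.Parse()
	fmt.Println(*headers, (*match).String())
}
```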

19
vendor/github.com/alecthomas/units/COPYING generated vendored Normal file
View file

@ -0,0 +1,19 @@
Copyright (C) 2014 Alec Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

13
vendor/github.com/alecthomas/units/README.md generated vendored Normal file
View file

@ -0,0 +1,13 @@
[![Go Reference](https://pkg.go.dev/badge/github.com/alecthomas/units.svg)](https://pkg.go.dev/github.com/alecthomas/units)
# Units - Helpful unit multipliers and functions for Go
The goal of this package is to have functionality similar to the [time](http://golang.org/pkg/time/) package.
It allows for code like this:
```go
n, err := ParseBase2Bytes("1KB")
// n == 1024
n = units.Mebibyte * 512
```

209
vendor/github.com/alecthomas/units/bytes.go generated vendored Normal file
View file

@ -0,0 +1,209 @@
package units
// Base2Bytes is the old non-SI power-of-2 byte scale (1024 bytes in a kilobyte,
// etc.).
type Base2Bytes int64
// Base-2 byte units.
const (
Kibibyte Base2Bytes = 1024
KiB = Kibibyte
Mebibyte = Kibibyte * 1024
MiB = Mebibyte
Gibibyte = Mebibyte * 1024
GiB = Gibibyte
Tebibyte = Gibibyte * 1024
TiB = Tebibyte
Pebibyte = Tebibyte * 1024
PiB = Pebibyte
Exbibyte = Pebibyte * 1024
EiB = Exbibyte
)
var (
bytesUnitMap = MakeUnitMap("iB", "B", 1024)
oldBytesUnitMap = MakeUnitMap("B", "B", 1024)
)
// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
// and KiB are both 1024.
// However "kB", which is the correct SI spelling of 1000 Bytes, is rejected.
func ParseBase2Bytes(s string) (Base2Bytes, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, oldBytesUnitMap)
}
return Base2Bytes(n), err
}
func (b Base2Bytes) String() string {
return ToString(int64(b), 1024, "iB", "B")
}
// MarshalText implement encoding.TextMarshaler to process json/yaml.
func (b Base2Bytes) MarshalText() ([]byte, error) {
return []byte(b.String()), nil
}
// UnmarshalText implement encoding.TextUnmarshaler to process json/yaml.
func (b *Base2Bytes) UnmarshalText(text []byte) error {
n, err := ParseBase2Bytes(string(text))
*b = n
return err
}
// Floor returns Base2Bytes with all but the largest unit zeroed out. So that e.g. 1GiB1MiB1KiB → 1GiB.
func (b Base2Bytes) Floor() Base2Bytes {
switch {
case b > Exbibyte:
return (b / Exbibyte) * Exbibyte
case b > Pebibyte:
return (b / Pebibyte) * Pebibyte
case b > Tebibyte:
return (b / Tebibyte) * Tebibyte
case b > Gibibyte:
return (b / Gibibyte) * Gibibyte
case b > Mebibyte:
return (b / Mebibyte) * Mebibyte
case b > Kibibyte:
return (b / Kibibyte) * Kibibyte
default:
return b
}
}
// Round returns Base2Bytes with all but the first n units zeroed out. So that e.g. 1GiB1MiB1KiB → 1GiB1MiB, if n is 2.
func (b Base2Bytes) Round(n int) Base2Bytes {
idx := 0
switch {
case b > Exbibyte:
idx = n
case b > Pebibyte:
idx = n + 1
case b > Tebibyte:
idx = n + 2
case b > Gibibyte:
idx = n + 3
case b > Mebibyte:
idx = n + 4
case b > Kibibyte:
idx = n + 5
}
switch idx {
case 1:
return b - b%Exbibyte
case 2:
return b - b%Pebibyte
case 3:
return b - b%Tebibyte
case 4:
return b - b%Gibibyte
case 5:
return b - b%Mebibyte
case 6:
return b - b%Kibibyte
default:
return b
}
}
var metricBytesUnitMap = MakeUnitMap("B", "B", 1000)
// MetricBytes are SI byte units (1000 bytes in a kilobyte).
type MetricBytes SI
// SI base-10 byte units.
const (
Kilobyte MetricBytes = 1000
KB = Kilobyte
Megabyte = Kilobyte * 1000
MB = Megabyte
Gigabyte = Megabyte * 1000
GB = Gigabyte
Terabyte = Gigabyte * 1000
TB = Terabyte
Petabyte = Terabyte * 1000
PB = Petabyte
Exabyte = Petabyte * 1000
EB = Exabyte
)
// ParseMetricBytes parses base-10 metric byte units. That is, KB is 1000 bytes.
func ParseMetricBytes(s string) (MetricBytes, error) {
n, err := ParseUnit(s, metricBytesUnitMap)
return MetricBytes(n), err
}
// TODO: represents 1000B as uppercase "KB", while SI standard requires "kB".
func (m MetricBytes) String() string {
return ToString(int64(m), 1000, "B", "B")
}
// Floor returns MetricBytes with all but the largest unit zeroed out. So that e.g. 1GB1MB1KB → 1GB.
func (b MetricBytes) Floor() MetricBytes {
switch {
case b > Exabyte:
return (b / Exabyte) * Exabyte
case b > Petabyte:
return (b / Petabyte) * Petabyte
case b > Terabyte:
return (b / Terabyte) * Terabyte
case b > Gigabyte:
return (b / Gigabyte) * Gigabyte
case b > Megabyte:
return (b / Megabyte) * Megabyte
case b > Kilobyte:
return (b / Kilobyte) * Kilobyte
default:
return b
}
}
// Round returns MetricBytes with all but the first n units zeroed out. So that e.g. 1GB1MB1KB → 1GB1MB, if n is 2.
func (b MetricBytes) Round(n int) MetricBytes {
idx := 0
switch {
case b > Exabyte:
idx = n
case b > Petabyte:
idx = n + 1
case b > Terabyte:
idx = n + 2
case b > Gigabyte:
idx = n + 3
case b > Megabyte:
idx = n + 4
case b > Kilobyte:
idx = n + 5
}
switch idx {
case 1:
return b - b%Exabyte
case 2:
return b - b%Petabyte
case 3:
return b - b%Terabyte
case 4:
return b - b%Gigabyte
case 5:
return b - b%Megabyte
case 6:
return b - b%Kilobyte
default:
return b
}
}
// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
// respectively. That is, KiB represents 1024 and kB, KB represent 1000.
func ParseStrictBytes(s string) (int64, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, metricBytesUnitMap)
}
return int64(n), err
}
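
A short usage sketch for the parsing entry points above; the expected values in the comments follow from the 1024- and 1000-based unit maps:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// Base-2 parsing: "GiB" and "GB" are both 1024-based here.
	n, err := units.ParseBase2Bytes("1.5GiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(n)) // 1610612736

	// Strict parsing keeps the two scales apart: "kB" is 1000, "KiB" is 1024.
	m, _ := units.ParseStrictBytes("1kB")
	fmt.Println(m) // 1000
}
```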

13
vendor/github.com/alecthomas/units/doc.go generated vendored Normal file
View file

@ -0,0 +1,13 @@
// Package units provides helpful unit multipliers and functions for Go.
//
// The goal of this package is to have functionality similar to the time [1] package.
//
//
// [1] http://golang.org/pkg/time/
//
// It allows for code like this:
//
// n, err := ParseBase2Bytes("1KB")
// // n == 1024
// n = units.Mebibyte * 512
package units

50
vendor/github.com/alecthomas/units/si.go generated vendored Normal file
View file

@ -0,0 +1,50 @@
package units
// SI units.
type SI int64
// SI unit multiples.
const (
Kilo SI = 1000
Mega = Kilo * 1000
Giga = Mega * 1000
Tera = Giga * 1000
Peta = Tera * 1000
Exa = Peta * 1000
)
func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
res := map[string]float64{
shortSuffix: 1,
// see below for "k" / "K"
"M" + suffix: float64(scale * scale),
"G" + suffix: float64(scale * scale * scale),
"T" + suffix: float64(scale * scale * scale * scale),
"P" + suffix: float64(scale * scale * scale * scale * scale),
"E" + suffix: float64(scale * scale * scale * scale * scale * scale),
}
// Standard SI prefixes use lowercase "k" for kilo = 1000.
// For compatibility, and to be fool-proof, we accept both "k" and "K" in metric mode.
//
// However, official binary prefixes are always capitalized - "KiB" -
// and we specifically never parse "kB" as 1024B because:
//
// (1) people pedantic enough to use lowercase according to SI unlikely to abuse "k" to mean 1024 :-)
//
// (2) Use of capital K for 1024 was an informal tradition predating IEC prefixes:
// "The binary meaning of the kilobyte for 1024 bytes typically uses the symbol KB, with an
// uppercase letter K."
// -- https://en.wikipedia.org/wiki/Kilobyte#Base_2_(1024_bytes)
// "Capitalization of the letter K became the de facto standard for binary notation, although this
// could not be extended to higher powers, and use of the lowercase k did persist.[13][14][15]"
// -- https://en.wikipedia.org/wiki/Binary_prefix#History
// See also the extensive https://en.wikipedia.org/wiki/Timeline_of_binary_prefixes.
if scale == 1024 {
res["K"+suffix] = float64(scale)
} else {
res["k"+suffix] = float64(scale)
res["K"+suffix] = float64(scale)
}
return res
}

138
vendor/github.com/alecthomas/units/util.go generated vendored Normal file
View file

@ -0,0 +1,138 @@
package units
import (
"errors"
"fmt"
"strings"
)
var (
siUnits = []string{"", "K", "M", "G", "T", "P", "E"}
)
func ToString(n int64, scale int64, suffix, baseSuffix string) string {
mn := len(siUnits)
out := make([]string, mn)
for i, m := range siUnits {
if n%scale != 0 || i == 0 && n == 0 {
s := suffix
if i == 0 {
s = baseSuffix
}
out[mn-1-i] = fmt.Sprintf("%d%s%s", n%scale, m, s)
}
n /= scale
if n == 0 {
break
}
}
return strings.Join(out, "")
}
// Below code ripped straight from http://golang.org/src/pkg/time/format.go?s=33392:33438#L1123
var errLeadingInt = errors.New("units: bad [0-9]*") // never printed
// leadingInt consumes the leading [0-9]* from s.
func leadingInt(s string) (x int64, rem string, err error) {
i := 0
for ; i < len(s); i++ {
c := s[i]
if c < '0' || c > '9' {
break
}
if x >= (1<<63-10)/10 {
// overflow
return 0, "", errLeadingInt
}
x = x*10 + int64(c) - '0'
}
return x, s[i:], nil
}
func ParseUnit(s string, unitMap map[string]float64) (int64, error) {
// [-+]?([0-9]*(\.[0-9]*)?[a-z]+)+
orig := s
f := float64(0)
neg := false
// Consume [-+]?
if s != "" {
c := s[0]
if c == '-' || c == '+' {
neg = c == '-'
s = s[1:]
}
}
// Special case: if all that is left is "0", this is zero.
if s == "0" {
return 0, nil
}
if s == "" {
return 0, errors.New("units: invalid " + orig)
}
for s != "" {
g := float64(0) // this element of the sequence
var x int64
var err error
// The next character must be [0-9.]
if !(s[0] == '.' || ('0' <= s[0] && s[0] <= '9')) {
return 0, errors.New("units: invalid " + orig)
}
// Consume [0-9]*
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
g = float64(x)
pre := pl != len(s) // whether we consumed anything before a period
// Consume (\.[0-9]*)?
post := false
if s != "" && s[0] == '.' {
s = s[1:]
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
scale := 1.0
for n := pl - len(s); n > 0; n-- {
scale *= 10
}
g += float64(x) / scale
post = pl != len(s)
}
if !pre && !post {
// no digits (e.g. ".s" or "-.s")
return 0, errors.New("units: invalid " + orig)
}
// Consume unit.
i := 0
for ; i < len(s); i++ {
c := s[i]
if c == '.' || ('0' <= c && c <= '9') {
break
}
}
u := s[:i]
s = s[i:]
unit, ok := unitMap[u]
if !ok {
return 0, errors.New("units: unknown unit " + u + " in " + orig)
}
f += g * unit
}
if neg {
f = -f
}
if f < float64(-1<<63) || f > float64(1<<63-1) {
return 0, errors.New("units: overflow parsing unit")
}
return int64(f), nil
}
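As a small companion sketch, again not part of the vendored file, ToString emits one component per non-zero SI "digit", so a binary value that is not a clean multiple prints two components; the concrete numbers below are only examples.
```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// 1536 bytes with a binary scale prints the KiB component plus the
	// 512-byte remainder; components with a zero remainder are skipped.
	fmt.Println(units.ToString(1536, 1024, "iB", "B")) // 1KiB512B

	// A clean multiple collapses to a single component.
	fmt.Println(units.ToString(3*1000*1000, 1000, "B", "B")) // 3MB
}
```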

20
vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file
View file

@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt generated vendored Normal file

File diff suppressed because it is too large

316
vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file
View file

@ -0,0 +1,316 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentile value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}
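Purely as orientation, not part of the vendored file, the usual Targeted-stream workflow looks roughly like this; the 0.5/0.9/0.99 targets and the random input are illustrative choices, not something mandated by the package.
```go
package main

import (
	"fmt"
	"math/rand"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the median, 90th and 99th percentiles with per-quantile
	// absolute error bounds.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.05,
		0.90: 0.01,
		0.99: 0.001,
	})

	for i := 0; i < 10000; i++ {
		q.Insert(rand.NormFloat64())
	}

	fmt.Println("p50 ~", q.Query(0.50))
	fmt.Println("p99 ~", q.Query(0.99))
	fmt.Println("count:", q.Count())
}
```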

22
vendor/github.com/cespare/xxhash/v2/LICENSE.txt generated vendored Normal file
View file

@ -0,0 +1,22 @@
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

72
vendor/github.com/cespare/xxhash/v2/README.md generated vendored Normal file
View file

@ -0,0 +1,72 @@
# xxhash
[![Go Reference](https://pkg.go.dev/badge/github.com/cespare/xxhash/v2.svg)](https://pkg.go.dev/github.com/cespare/xxhash/v2)
[![Test](https://github.com/cespare/xxhash/actions/workflows/test.yml/badge.svg)](https://github.com/cespare/xxhash/actions/workflows/test.yml)
xxhash is a Go implementation of the 64-bit [xxHash] algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
This package provides a straightforward API:
```
func Sum64(b []byte) uint64
func Sum64String(s string) uint64
type Digest struct{ ... }
func New() *Digest
```
The `Digest` type implements hash.Hash64. Its key methods are:
```
func (*Digest) Write([]byte) (int, error)
func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
The package is written with optimized pure Go and also contains even faster
assembly implementations for amd64 and arm64. If desired, the `purego` build tag
opts into using the Go code even on those architectures.
[xxHash]: http://cyan4973.github.io/xxHash/
## Compatibility
This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:
* 1.9.7+ for Go 1.9
* 1.10.3+ for Go 1.10
* Go 1.11 or later
I recommend using the latest release of Go.
## Benchmarks
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
| input size | purego | asm |
| ---------- | --------- | --------- |
| 4 B | 1.3 GB/s | 1.2 GB/s |
| 16 B | 2.9 GB/s | 3.5 GB/s |
| 100 B | 6.9 GB/s | 8.1 GB/s |
| 4 KB | 11.7 GB/s | 16.7 GB/s |
| 10 MB | 12.0 GB/s | 17.3 GB/s |
These numbers were generated on Ubuntu 20.04 with an Intel Xeon Platinum 8252C
CPU using the following commands under Go 1.19.2:
```
benchstat <(go test -tags purego -benchtime 500ms -count 15 -bench 'Sum64$')
benchstat <(go test -benchtime 500ms -count 15 -bench 'Sum64$')
```
## Projects using this package
- [InfluxDB](https://github.com/influxdata/influxdb)
- [Prometheus](https://github.com/prometheus/prometheus)
- [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
- [FreeCache](https://github.com/coocood/freecache)
- [FastCache](https://github.com/VictoriaMetrics/fastcache)
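A short usage sketch, not part of the vendored README, exercising the API listed above; the input strings are arbitrary.
```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a byte slice or a string.
	fmt.Printf("%x\n", xxhash.Sum64([]byte("hello")))
	fmt.Printf("%x\n", xxhash.Sum64String("hello"))

	// Streaming: Digest implements hash.Hash64.
	d := xxhash.New()
	d.Write([]byte("hel"))
	d.WriteString("lo")
	fmt.Printf("%x\n", d.Sum64()) // same value as the one-shot calls
}
```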

10
vendor/github.com/cespare/xxhash/v2/testall.sh generated vendored Normal file
View file

@ -0,0 +1,10 @@
#!/bin/bash
set -eu -o pipefail
# Small convenience script for running the tests with various combinations of
# arch/tags. This assumes we're running on amd64 and have qemu available.
go test ./...
go test -tags purego ./...
GOARCH=arm64 go test
GOARCH=arm64 go test -tags purego

228
vendor/github.com/cespare/xxhash/v2/xxhash.go generated vendored Normal file
View file

@ -0,0 +1,228 @@
// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described
// at http://cyan4973.github.io/xxHash/.
package xxhash
import (
"encoding/binary"
"errors"
"math/bits"
)
const (
prime1 uint64 = 11400714785074694791
prime2 uint64 = 14029467366897019727
prime3 uint64 = 1609587929392839161
prime4 uint64 = 9650029242287828579
prime5 uint64 = 2870177450012600261
)
// Store the primes in an array as well.
//
// The consts are used when possible in Go code to avoid MOVs but we need a
// contiguous array for the assembly code.
var primes = [...]uint64{prime1, prime2, prime3, prime4, prime5}
// Digest implements hash.Hash64.
type Digest struct {
v1 uint64
v2 uint64
v3 uint64
v4 uint64
total uint64
mem [32]byte
n int // how much of mem is used
}
// New creates a new Digest that computes the 64-bit xxHash algorithm.
func New() *Digest {
var d Digest
d.Reset()
return &d
}
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
d.v1 = primes[0] + prime2
d.v2 = prime2
d.v3 = 0
d.v4 = -primes[0]
d.total = 0
d.n = 0
}
// Size always returns 8 bytes.
func (d *Digest) Size() int { return 8 }
// BlockSize always returns 32 bytes.
func (d *Digest) BlockSize() int { return 32 }
// Write adds more data to d. It always returns len(b), nil.
func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
memleft := d.mem[d.n&(len(d.mem)-1):]
if d.n+n < 32 {
// This new data doesn't even fill the current block.
copy(memleft, b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
c := copy(memleft, b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
b = b[c:]
d.n = 0
}
if len(b) >= 32 {
// One or more full blocks left.
nw := writeBlocks(d, b)
b = b[nw:]
}
// Store any remaining partial block.
copy(d.mem[:], b)
d.n = len(b)
return
}
// Sum appends the current hash to b and returns the resulting slice.
func (d *Digest) Sum(b []byte) []byte {
s := d.Sum64()
return append(
b,
byte(s>>56),
byte(s>>48),
byte(s>>40),
byte(s>>32),
byte(s>>24),
byte(s>>16),
byte(s>>8),
byte(s),
)
}
// Sum64 returns the current hash.
func (d *Digest) Sum64() uint64 {
var h uint64
if d.total >= 32 {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = d.v3 + prime5
}
h += d.total
b := d.mem[:d.n&(len(d.mem)-1)]
for ; len(b) >= 8; b = b[8:] {
k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if len(b) >= 4 {
h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
b = b[4:]
}
for ; len(b) > 0; b = b[1:] {
h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
const (
magic = "xxh\x06"
marshaledSize = len(magic) + 8*5 + 32
)
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (d *Digest) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaledSize)
b = append(b, magic...)
b = appendUint64(b, d.v1)
b = appendUint64(b, d.v2)
b = appendUint64(b, d.v3)
b = appendUint64(b, d.v4)
b = appendUint64(b, d.total)
b = append(b, d.mem[:d.n]...)
b = b[:len(b)+len(d.mem)-d.n]
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (d *Digest) UnmarshalBinary(b []byte) error {
if len(b) < len(magic) || string(b[:len(magic)]) != magic {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaledSize {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic):]
b, d.v1 = consumeUint64(b)
b, d.v2 = consumeUint64(b)
b, d.v3 = consumeUint64(b)
b, d.v4 = consumeUint64(b)
b, d.total = consumeUint64(b)
copy(d.mem[:], b)
d.n = int(d.total % uint64(len(d.mem)))
return nil
}
func appendUint64(b []byte, x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return append(b, a[:]...)
}
func consumeUint64(b []byte) ([]byte, uint64) {
x := u64(b)
return b[8:], x
}
func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) }
func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) }
func round(acc, input uint64) uint64 {
acc += input * prime2
acc = rol31(acc)
acc *= prime1
return acc
}
func mergeRound(acc, val uint64) uint64 {
val = round(0, val)
acc ^= val
acc = acc*prime1 + prime4
return acc
}
func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }
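The MarshalBinary/UnmarshalBinary methods above make the streaming state portable. Below is a hedged sketch, not part of the vendored file, of pausing and resuming a hash; the input strings are arbitrary.
```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	d := xxhash.New()
	d.Write([]byte("partial inp"))

	// Snapshot the digest state...
	state, err := d.MarshalBinary()
	if err != nil {
		panic(err)
	}

	// ...and resume later in a fresh Digest.
	d2 := xxhash.New()
	if err := d2.UnmarshalBinary(state); err != nil {
		panic(err)
	}
	d2.Write([]byte("ut"))

	fmt.Println(d2.Sum64() == xxhash.Sum64([]byte("partial input"))) // true
}
```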

209
vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s generated vendored Normal file
View file

@ -0,0 +1,209 @@
//go:build !appengine && gc && !purego
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
// Registers:
#define h AX
#define d AX
#define p SI // pointer to advance through b
#define n DX
#define end BX // loop end
#define v1 R8
#define v2 R9
#define v3 R10
#define v4 R11
#define x R12
#define prime1 R13
#define prime2 R14
#define prime4 DI
#define round(acc, x) \
IMULQ prime2, x \
ADDQ x, acc \
ROLQ $31, acc \
IMULQ prime1, acc
// round0 performs the operation x = round(0, x).
#define round0(x) \
IMULQ prime2, x \
ROLQ $31, x \
IMULQ prime1, x
// mergeRound applies a merge round on the two registers acc and x.
// It assumes that prime1, prime2, and prime4 have been loaded.
#define mergeRound(acc, x) \
round0(x) \
XORQ x, acc \
IMULQ prime1, acc \
ADDQ prime4, acc
// blockLoop processes as many 32-byte blocks as possible,
// updating v1, v2, v3, and v4. It assumes that there is at least one block
// to process.
#define blockLoop() \
loop: \
MOVQ +0(p), x \
round(v1, x) \
MOVQ +8(p), x \
round(v2, x) \
MOVQ +16(p), x \
round(v3, x) \
MOVQ +24(p), x \
round(v4, x) \
ADDQ $32, p \
CMPQ p, end \
JLE loop
// func Sum64(b []byte) uint64
TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
// Load fixed primes.
MOVQ ·primes+0(SB), prime1
MOVQ ·primes+8(SB), prime2
MOVQ ·primes+24(SB), prime4
// Load slice.
MOVQ b_base+0(FP), p
MOVQ b_len+8(FP), n
LEAQ (p)(n*1), end
// The first loop limit will be len(b)-32.
SUBQ $32, end
// Check whether we have at least one block.
CMPQ n, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
MOVQ prime1, v1
ADDQ prime2, v1
MOVQ prime2, v2
XORQ v3, v3
XORQ v4, v4
SUBQ prime1, v4
blockLoop()
MOVQ v1, h
ROLQ $1, h
MOVQ v2, x
ROLQ $7, x
ADDQ x, h
MOVQ v3, x
ROLQ $12, x
ADDQ x, h
MOVQ v4, x
ROLQ $18, x
ADDQ x, h
mergeRound(h, v1)
mergeRound(h, v2)
mergeRound(h, v3)
mergeRound(h, v4)
JMP afterBlocks
noBlocks:
MOVQ ·primes+32(SB), h
afterBlocks:
ADDQ n, h
ADDQ $24, end
CMPQ p, end
JG try4
loop8:
MOVQ (p), x
ADDQ $8, p
round0(x)
XORQ x, h
ROLQ $27, h
IMULQ prime1, h
ADDQ prime4, h
CMPQ p, end
JLE loop8
try4:
ADDQ $4, end
CMPQ p, end
JG try1
MOVL (p), x
ADDQ $4, p
IMULQ prime1, x
XORQ x, h
ROLQ $23, h
IMULQ prime2, h
ADDQ ·primes+16(SB), h
try1:
ADDQ $4, end
CMPQ p, end
JGE finalize
loop1:
MOVBQZX (p), x
ADDQ $1, p
IMULQ ·primes+32(SB), x
XORQ x, h
ROLQ $11, h
IMULQ prime1, h
CMPQ p, end
JL loop1
finalize:
MOVQ h, x
SHRQ $33, x
XORQ x, h
IMULQ prime2, h
MOVQ h, x
SHRQ $29, x
XORQ x, h
IMULQ ·primes+16(SB), h
MOVQ h, x
SHRQ $32, x
XORQ x, h
MOVQ h, ret+24(FP)
RET
// func writeBlocks(d *Digest, b []byte) int
TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
// Load fixed primes needed for round.
MOVQ ·primes+0(SB), prime1
MOVQ ·primes+8(SB), prime2
// Load slice.
MOVQ b_base+8(FP), p
MOVQ b_len+16(FP), n
LEAQ (p)(n*1), end
SUBQ $32, end
// Load vN from d.
MOVQ s+0(FP), d
MOVQ 0(d), v1
MOVQ 8(d), v2
MOVQ 16(d), v3
MOVQ 24(d), v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
blockLoop()
// Copy vN back to d.
MOVQ v1, 0(d)
MOVQ v2, 8(d)
MOVQ v3, 16(d)
MOVQ v4, 24(d)
// The number of bytes written is p minus the old base pointer.
SUBQ b_base+8(FP), p
MOVQ p, ret+32(FP)
RET

183
vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s generated vendored Normal file
View file

@ -0,0 +1,183 @@
//go:build !appengine && gc && !purego
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
// Registers:
#define digest R1
#define h R2 // return value
#define p R3 // input pointer
#define n R4 // input length
#define nblocks R5 // n / 32
#define prime1 R7
#define prime2 R8
#define prime3 R9
#define prime4 R10
#define prime5 R11
#define v1 R12
#define v2 R13
#define v3 R14
#define v4 R15
#define x1 R20
#define x2 R21
#define x3 R22
#define x4 R23
#define round(acc, x) \
MADD prime2, acc, x, acc \
ROR $64-31, acc \
MUL prime1, acc
// round0 performs the operation x = round(0, x).
#define round0(x) \
MUL prime2, x \
ROR $64-31, x \
MUL prime1, x
#define mergeRound(acc, x) \
round0(x) \
EOR x, acc \
MADD acc, prime4, prime1, acc
// blockLoop processes as many 32-byte blocks as possible,
// updating v1, v2, v3, and v4. It assumes that n >= 32.
#define blockLoop() \
LSR $5, n, nblocks \
PCALIGN $16 \
loop: \
LDP.P 16(p), (x1, x2) \
LDP.P 16(p), (x3, x4) \
round(v1, x1) \
round(v2, x2) \
round(v3, x3) \
round(v4, x4) \
SUB $1, nblocks \
CBNZ nblocks, loop
// func Sum64(b []byte) uint64
TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
LDP b_base+0(FP), (p, n)
LDP ·primes+0(SB), (prime1, prime2)
LDP ·primes+16(SB), (prime3, prime4)
MOVD ·primes+32(SB), prime5
CMP $32, n
CSEL LT, prime5, ZR, h // if n < 32 { h = prime5 } else { h = 0 }
BLT afterLoop
ADD prime1, prime2, v1
MOVD prime2, v2
MOVD $0, v3
NEG prime1, v4
blockLoop()
ROR $64-1, v1, x1
ROR $64-7, v2, x2
ADD x1, x2
ROR $64-12, v3, x3
ROR $64-18, v4, x4
ADD x3, x4
ADD x2, x4, h
mergeRound(h, v1)
mergeRound(h, v2)
mergeRound(h, v3)
mergeRound(h, v4)
afterLoop:
ADD n, h
TBZ $4, n, try8
LDP.P 16(p), (x1, x2)
round0(x1)
// NOTE: here and below, sequencing the EOR after the ROR (using a
// rotated register) is worth a small but measurable speedup for small
// inputs.
ROR $64-27, h
EOR x1 @> 64-27, h, h
MADD h, prime4, prime1, h
round0(x2)
ROR $64-27, h
EOR x2 @> 64-27, h, h
MADD h, prime4, prime1, h
try8:
TBZ $3, n, try4
MOVD.P 8(p), x1
round0(x1)
ROR $64-27, h
EOR x1 @> 64-27, h, h
MADD h, prime4, prime1, h
try4:
TBZ $2, n, try2
MOVWU.P 4(p), x2
MUL prime1, x2
ROR $64-23, h
EOR x2 @> 64-23, h, h
MADD h, prime3, prime2, h
try2:
TBZ $1, n, try1
MOVHU.P 2(p), x3
AND $255, x3, x1
LSR $8, x3, x2
MUL prime5, x1
ROR $64-11, h
EOR x1 @> 64-11, h, h
MUL prime1, h
MUL prime5, x2
ROR $64-11, h
EOR x2 @> 64-11, h, h
MUL prime1, h
try1:
TBZ $0, n, finalize
MOVBU (p), x4
MUL prime5, x4
ROR $64-11, h
EOR x4 @> 64-11, h, h
MUL prime1, h
finalize:
EOR h >> 33, h
MUL prime2, h
EOR h >> 29, h
MUL prime3, h
EOR h >> 32, h
MOVD h, ret+24(FP)
RET
// func writeBlocks(d *Digest, b []byte) int
TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
LDP ·primes+0(SB), (prime1, prime2)
// Load state. Assume v[1-4] are stored contiguously.
MOVD d+0(FP), digest
LDP 0(digest), (v1, v2)
LDP 16(digest), (v3, v4)
LDP b_base+8(FP), (p, n)
blockLoop()
// Store updated state.
STP (v1, v2), 0(digest)
STP (v3, v4), 16(digest)
BIC $31, n
MOVD n, ret+32(FP)
RET

15
vendor/github.com/cespare/xxhash/v2/xxhash_asm.go generated vendored Normal file
View file

@ -0,0 +1,15 @@
//go:build (amd64 || arm64) && !appengine && gc && !purego
// +build amd64 arm64
// +build !appengine
// +build gc
// +build !purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
//
//go:noescape
func Sum64(b []byte) uint64
//go:noescape
func writeBlocks(d *Digest, b []byte) int

76
vendor/github.com/cespare/xxhash/v2/xxhash_other.go generated vendored Normal file
View file

@ -0,0 +1,76 @@
//go:build (!amd64 && !arm64) || appengine || !gc || purego
// +build !amd64,!arm64 appengine !gc purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
func Sum64(b []byte) uint64 {
// A simpler version would be
// d := New()
// d.Write(b)
// return d.Sum64()
// but this is faster, particularly for small inputs.
n := len(b)
var h uint64
if n >= 32 {
v1 := primes[0] + prime2
v2 := prime2
v3 := uint64(0)
v4 := -primes[0]
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = prime5
}
h += uint64(n)
for ; len(b) >= 8; b = b[8:] {
k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if len(b) >= 4 {
h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
b = b[4:]
}
for ; len(b) > 0; b = b[1:] {
h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
func writeBlocks(d *Digest, b []byte) int {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
n := len(b)
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4
return n - len(b)
}

16
vendor/github.com/cespare/xxhash/v2/xxhash_safe.go generated vendored Normal file
View file

@ -0,0 +1,16 @@
//go:build appengine
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
package xxhash
// Sum64String computes the 64-bit xxHash digest of s.
func Sum64String(s string) uint64 {
return Sum64([]byte(s))
}
// WriteString adds more data to d. It always returns len(s), nil.
func (d *Digest) WriteString(s string) (n int, err error) {
return d.Write([]byte(s))
}

58
vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go generated vendored Normal file
View file

@ -0,0 +1,58 @@
//go:build !appengine
// +build !appengine
// This file encapsulates usage of unsafe.
// xxhash_safe.go contains the safe implementations.
package xxhash
import (
"unsafe"
)
// In the future it's possible that compiler optimizations will make these
// XxxString functions unnecessary by realizing that calls such as
// Sum64([]byte(s)) don't need to copy s. See https://go.dev/issue/2205.
// If that happens, even if we keep these functions they can be replaced with
// the trivial safe code.
// NOTE: The usual way of doing an unsafe string-to-[]byte conversion is:
//
// var b []byte
// bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
// bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
// bh.Len = len(s)
// bh.Cap = len(s)
//
// Unfortunately, as of Go 1.15.3 the inliner's cost model assigns a high enough
// weight to this sequence of expressions that any function that uses it will
// not be inlined. Instead, the functions below use a different unsafe
// conversion designed to minimize the inliner weight and allow both to be
// inlined. There is also a test (TestInlining) which verifies that these are
// inlined.
//
// See https://github.com/golang/go/issues/42739 for discussion.
// Sum64String computes the 64-bit xxHash digest of s.
// It may be faster than Sum64([]byte(s)) by avoiding a copy.
func Sum64String(s string) uint64 {
b := *(*[]byte)(unsafe.Pointer(&sliceHeader{s, len(s)}))
return Sum64(b)
}
// WriteString adds more data to d. It always returns len(s), nil.
// It may be faster than Write([]byte(s)) by avoiding a copy.
func (d *Digest) WriteString(s string) (n int, err error) {
d.Write(*(*[]byte)(unsafe.Pointer(&sliceHeader{s, len(s)})))
// d.Write always returns len(s), nil.
// Ignoring the return output and returning these fixed values buys a
// savings of 6 in the inliner's cost model.
return len(s), nil
}
// sliceHeader is similar to reflect.SliceHeader, but it assumes that the layout
// of the first two words is the same as the layout of a string.
type sliceHeader struct {
s string
cap int
}

191
vendor/github.com/coreos/go-systemd/v22/LICENSE generated vendored Normal file
View file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

5
vendor/github.com/coreos/go-systemd/v22/NOTICE generated vendored Normal file
View file

@ -0,0 +1,5 @@
CoreOS Project
Copyright 2018 CoreOS, Inc
This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).

View file

@ -0,0 +1,70 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build !windows
// +build !windows
// Package activation implements primitives for systemd socket activation.
package activation
import (
"os"
"strconv"
"strings"
"syscall"
)
const (
// listenFdsStart corresponds to `SD_LISTEN_FDS_START`.
listenFdsStart = 3
)
// Files returns a slice containing an `os.File` object for each
// file descriptor passed to this process via the systemd fd-passing protocol.
//
// The order of the file descriptors is preserved in the returned slice.
// `unsetEnv` is typically set to `true` in order to avoid clashes in
// fd usage and to avoid leaking environment flags to child processes.
func Files(unsetEnv bool) []*os.File {
if unsetEnv {
defer os.Unsetenv("LISTEN_PID")
defer os.Unsetenv("LISTEN_FDS")
defer os.Unsetenv("LISTEN_FDNAMES")
}
pid, err := strconv.Atoi(os.Getenv("LISTEN_PID"))
if err != nil || pid != os.Getpid() {
return nil
}
nfds, err := strconv.Atoi(os.Getenv("LISTEN_FDS"))
if err != nil || nfds == 0 {
return nil
}
names := strings.Split(os.Getenv("LISTEN_FDNAMES"), ":")
files := make([]*os.File, 0, nfds)
for fd := listenFdsStart; fd < listenFdsStart+nfds; fd++ {
syscall.CloseOnExec(fd)
name := "LISTEN_FD_" + strconv.Itoa(fd)
offset := fd - listenFdsStart
if offset < len(names) && len(names[offset]) > 0 {
name = names[offset]
}
files = append(files, os.NewFile(uintptr(fd), name))
}
return files
}
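As a non-authoritative sketch, not part of the vendored file, Files is typically consumed directly only when the raw descriptors are needed; the higher-level Listeners helpers further down wrap it.
```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// Files(true) also clears LISTEN_PID/LISTEN_FDS/LISTEN_FDNAMES so the
	// descriptors are not accidentally inherited by child processes.
	files := activation.Files(true)
	if len(files) == 0 {
		fmt.Println("not socket-activated")
		return
	}
	for _, f := range files {
		fmt.Println("inherited fd:", f.Fd(), "name:", f.Name())
	}
}
```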

View file

@ -0,0 +1,21 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package activation
import "os"
func Files(unsetEnv bool) []*os.File {
return nil
}

View file

@ -0,0 +1,103 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package activation
import (
"crypto/tls"
"net"
)
// Listeners returns a slice containing a net.Listener for each matching socket type
// passed to this process.
//
// The order of the file descriptors is preserved in the returned slice.
// Nil values are used to fill any gaps. For example, if systemd were to return file descriptors
// corresponding with "udp, tcp, tcp", then the slice would contain {nil, net.Listener, net.Listener}.
func Listeners() ([]net.Listener, error) {
files := Files(true)
listeners := make([]net.Listener, len(files))
for i, f := range files {
if pc, err := net.FileListener(f); err == nil {
listeners[i] = pc
f.Close()
}
}
return listeners, nil
}
// ListenersWithNames maps a listener name to a set of net.Listener instances.
func ListenersWithNames() (map[string][]net.Listener, error) {
files := Files(true)
listeners := map[string][]net.Listener{}
for _, f := range files {
if pc, err := net.FileListener(f); err == nil {
current, ok := listeners[f.Name()]
if !ok {
listeners[f.Name()] = []net.Listener{pc}
} else {
listeners[f.Name()] = append(current, pc)
}
f.Close()
}
}
return listeners, nil
}
// TLSListeners returns a slice containing a net.Listener for each matching TCP socket type
// passed to this process.
// It uses the default Listeners func and forces TCP socket handlers to use TLS based on tlsConfig.
func TLSListeners(tlsConfig *tls.Config) ([]net.Listener, error) {
listeners, err := Listeners()
if listeners == nil || err != nil {
return nil, err
}
if tlsConfig != nil {
for i, l := range listeners {
// Activate TLS only for TCP sockets
if l.Addr().Network() == "tcp" {
listeners[i] = tls.NewListener(l, tlsConfig)
}
}
}
return listeners, err
}
// TLSListenersWithNames maps a listener name to a net.Listener with
// the associated TLS configuration.
func TLSListenersWithNames(tlsConfig *tls.Config) (map[string][]net.Listener, error) {
listeners, err := ListenersWithNames()
if listeners == nil || err != nil {
return nil, err
}
if tlsConfig != nil {
for _, ll := range listeners {
// Activate TLS only for TCP sockets
for i, l := range ll {
if l.Addr().Network() == "tcp" {
ll[i] = tls.NewListener(l, tlsConfig)
}
}
}
}
return listeners, err
}
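A hedged usage sketch, not part of the vendored file, of the Listeners helper for a socket-activated HTTP service; the fallback address ":8080" and the handler body are illustrative only.
```go
package main

import (
	"fmt"
	"net"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	listeners, err := activation.Listeners()
	if err != nil {
		panic(err)
	}

	var ln net.Listener
	if len(listeners) > 0 && listeners[0] != nil {
		// Socket passed in by systemd (.socket unit).
		ln = listeners[0]
	} else {
		// Fallback for running outside of systemd.
		ln, err = net.Listen("tcp", ":8080")
		if err != nil {
			panic(err)
		}
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a socket-activated server")
	})
	panic(http.Serve(ln, nil))
}
```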

View file

@ -0,0 +1,38 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package activation
import (
"net"
)
// PacketConns returns a slice containing a net.PacketConn for each matching socket type
// passed to this process.
//
// The order of the file descriptors is preserved in the returned slice.
// Nil values are used to fill any gaps. For example, if systemd were to return file descriptors
// corresponding with "udp, tcp, udp", then the slice would contain {net.PacketConn, nil, net.PacketConn}.
func PacketConns() ([]net.PacketConn, error) {
files := Files(true)
conns := make([]net.PacketConn, len(files))
for i, f := range files {
if pc, err := net.FilePacketConn(f); err == nil {
conns[i] = pc
f.Close()
}
}
return conns, nil
}

15
vendor/github.com/go-kit/log/.gitignore generated vendored Normal file
View file

@ -0,0 +1,15 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/

21
vendor/github.com/go-kit/log/LICENSE generated vendored Normal file
View file

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021 Go kit
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

156
vendor/github.com/go-kit/log/README.md generated vendored Normal file
View file

@ -0,0 +1,156 @@
# package log
[![Go Reference](https://pkg.go.dev/badge/github.com/go-kit/log.svg)](https://pkg.go.dev/github.com/go-kit/log)
[![Go Report Card](https://goreportcard.com/badge/go-kit/log)](https://goreportcard.com/report/go-kit/log)
[![GitHub Actions](https://github.com/go-kit/log/actions/workflows/test.yml/badge.svg)](https://github.com/go-kit/log/actions/workflows/test.yml)
[![Coverage Status](https://coveralls.io/repos/github/go-kit/log/badge.svg?branch=main)](https://coveralls.io/github/go-kit/log?branch=main)
`package log` provides a minimal interface for structured logging in services.
It may be wrapped to encode conventions, enforce type-safety, provide leveled
logging, and so on. It can be used for both typical application log events,
and log-structured data streams.
## Structured logging
Structured logging is, basically, conceding to the reality that logs are
_data_, and warrant some level of schematic rigor. Using a stricter,
key/value-oriented message format for our logs, containing contextual and
semantic information, makes it much easier to get insight into the
operational activity of the systems we build. Consequently, `package log` is
of the strong belief that "[the benefits of structured logging outweigh the
minimal effort involved](https://www.thoughtworks.com/radar/techniques/structured-logging)".
Migrating from unstructured to structured logging is probably a lot easier
than you'd expect.
```go
// Unstructured
log.Printf("HTTP server listening on %s", addr)
// Structured
logger.Log("transport", "HTTP", "addr", addr, "msg", "listening")
```
## Usage
### Typical application logging
```go
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
logger.Log("question", "what is the meaning of life?", "answer", 42)
// Output:
// question="what is the meaning of life?" answer=42
```
### Contextual Loggers
```go
func main() {
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger, "instance_id", 123)
logger.Log("msg", "starting")
NewWorker(log.With(logger, "component", "worker")).Run()
NewSlacker(log.With(logger, "component", "slacker")).Run()
}
// Output:
// instance_id=123 msg=starting
// instance_id=123 component=worker msg=running
// instance_id=123 component=slacker msg=running
```
### Interact with stdlib logger
Redirect stdlib logger to Go kit logger.
```go
import (
"os"
stdlog "log"
kitlog "github.com/go-kit/log"
)
func main() {
logger := kitlog.NewJSONLogger(kitlog.NewSyncWriter(os.Stdout))
stdlog.SetOutput(kitlog.NewStdlibAdapter(logger))
stdlog.Print("I sure like pie")
}
// Output:
// {"msg":"I sure like pie","ts":"2016/01/01 12:34:56"}
```
Or, if, for legacy reasons, you need to pipe all of your logging through the
stdlib log package, you can redirect Go kit logger to the stdlib logger.
```go
logger := kitlog.NewLogfmtLogger(kitlog.StdlibWriter{})
logger.Log("legacy", true, "msg", "at least it's something")
// Output:
// 2016/01/01 12:34:56 legacy=true msg="at least it's something"
```
### Timestamps and callers
```go
var logger log.Logger
logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
logger.Log("msg", "hello")
// Output:
// ts=2016-01-01T12:34:56Z caller=main.go:15 msg=hello
```
## Levels
Log levels are supported via the [level package](https://godoc.org/github.com/go-kit/log/level).
## Supported output formats
- [Logfmt](https://brandur.org/logfmt) ([see also](https://blog.codeship.com/logfmt-a-log-format-thats-easy-to-read-and-write))
- JSON
## Enhancements
`package log` is centered on the one-method Logger interface.
```go
type Logger interface {
Log(keyvals ...interface{}) error
}
```
This interface, and its supporting code, is the product of much iteration
and evaluation. For more details on the evolution of the Logger interface,
see [The Hunt for a Logger Interface](http://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1),
a talk by [Chris Hines](https://github.com/ChrisHines).
Also, please see
[#63](https://github.com/go-kit/kit/issues/63),
[#76](https://github.com/go-kit/kit/pull/76),
[#131](https://github.com/go-kit/kit/issues/131),
[#157](https://github.com/go-kit/kit/pull/157),
[#164](https://github.com/go-kit/kit/issues/164), and
[#252](https://github.com/go-kit/kit/pull/252)
to review historical conversations about package log and the Logger interface.
Value-add packages and suggestions,
like improvements to [the leveled logger](https://godoc.org/github.com/go-kit/log/level),
are of course welcome. Good proposals should
- Be composable with [contextual loggers](https://godoc.org/github.com/go-kit/log#With),
- Not break the behavior of [log.Caller](https://godoc.org/github.com/go-kit/log#Caller) in any wrapped contextual loggers, and
- Be friendly to packages that accept only an unadorned log.Logger.
## Benchmarks & comparisons
There are a few Go logging benchmarks and comparisons that include Go kit's package log.
- [imkira/go-loggers-bench](https://github.com/imkira/go-loggers-bench) includes kit/log
- [uber-common/zap](https://github.com/uber-common/zap), a zero-alloc logging library, includes a comparison with kit/log

116
vendor/github.com/go-kit/log/doc.go generated vendored Normal file
View file

@ -0,0 +1,116 @@
// Package log provides a structured logger.
//
// Structured logging produces logs easily consumed later by humans or
// machines. Humans might be interested in debugging errors, or tracing
// specific requests. Machines might be interested in counting interesting
// events, or aggregating information for off-line processing. In both cases,
// it is important that the log messages are structured and actionable.
// Package log is designed to encourage both of these best practices.
//
// Basic Usage
//
// The fundamental interface is Logger. Loggers create log events from
// key/value data. The Logger interface has a single method, Log, which
// accepts a sequence of alternating key/value pairs, which this package names
// keyvals.
//
// type Logger interface {
// Log(keyvals ...interface{}) error
// }
//
// Here is an example of a function using a Logger to create log events.
//
// func RunTask(task Task, logger log.Logger) string {
// logger.Log("taskID", task.ID, "event", "starting task")
// ...
// logger.Log("taskID", task.ID, "event", "task complete")
// }
//
// The keys in the above example are "taskID" and "event". The values are
// task.ID, "starting task", and "task complete". Every key is followed
// immediately by its value.
//
// Keys are usually plain strings. Values may be any type that has a sensible
// encoding in the chosen log format. With structured logging it is a good
// idea to log simple values without formatting them. This practice allows
// the chosen logger to encode values in the most appropriate way.
//
// Contextual Loggers
//
// A contextual logger stores keyvals that it includes in all log events.
// Building appropriate contextual loggers reduces repetition and aids
// consistency in the resulting log output. With, WithPrefix, and WithSuffix
// add context to a logger. We can use With to improve the RunTask example.
//
// func RunTask(task Task, logger log.Logger) string {
// logger = log.With(logger, "taskID", task.ID)
// logger.Log("event", "starting task")
// ...
// taskHelper(task.Cmd, logger)
// ...
// logger.Log("event", "task complete")
// }
//
// The improved version emits the same log events as the original for the
// first and last calls to Log. Passing the contextual logger to taskHelper
// enables each log event created by taskHelper to include the task.ID even
// though taskHelper does not have access to that value. Using contextual
// loggers this way simplifies producing log output that enables tracing the
// life cycle of individual tasks. (See the Contextual example for the full
// code of the above snippet.)
//
// Dynamic Contextual Values
//
// A Valuer function stored in a contextual logger generates a new value each
// time an event is logged. The Valuer example demonstrates how this feature
// works.
//
// Valuers provide the basis for consistently logging timestamps and source
// code location. The log package defines several valuers for that purpose.
// See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and
// DefaultCaller. A common logger initialization sequence that ensures all log
// entries contain a timestamp and source location looks like this:
//
// logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
// logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
//
// Concurrent Safety
//
// Applications with multiple goroutines want each log event written to the
// same logger to remain separate from other log events. Package log provides
// two simple solutions for concurrent safe logging.
//
// NewSyncWriter wraps an io.Writer and serializes each call to its Write
// method. Using a SyncWriter has the benefit that the smallest practical
// portion of the logging logic is performed within a mutex, but it requires
// the formatting Logger to make only one call to Write per log event.
//
// NewSyncLogger wraps any Logger and serializes each call to its Log method.
// Using a SyncLogger has the benefit that it guarantees each log event is
// handled atomically within the wrapped logger, but it typically serializes
// both the formatting and output logic. Use a SyncLogger if the formatting
// logger may perform multiple writes per log event.
//
// Error Handling
//
// This package relies on the practice of wrapping or decorating loggers with
// other loggers to provide composable pieces of functionality. It also means
// that Logger.Log must return an error because some
// implementations—especially those that output log data to an io.Writer—may
// encounter errors that cannot be handled locally. This in turn means that
// Loggers that wrap other loggers should return errors from the wrapped
// logger up the stack.
//
// Fortunately, the decorator pattern also provides a way to avoid the
// necessity to check for errors every time an application calls Logger.Log.
// An application required to panic whenever its Logger encounters
// an error could initialize its logger as follows.
//
// fmtlogger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
// logger := log.LoggerFunc(func(keyvals ...interface{}) error {
// if err := fmtlogger.Log(keyvals...); err != nil {
// panic(err)
// }
// return nil
// })
package log
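
The initialization and contextual-logging pattern documented above can be exercised end to end in a short program. A minimal sketch using only APIs defined in this package; the key names and task ID are illustrative:

```go
// Minimal sketch of the documented usage pattern; key names are illustrative.
package main

import (
	"os"

	"github.com/go-kit/log"
)

func main() {
	// Serialize writes so concurrent goroutines cannot interleave output.
	logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))

	// Every event gets a UTC timestamp and the file:line of the Log call.
	logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)

	// A contextual logger carries taskID into every subsequent event.
	taskLogger := log.With(logger, "taskID", 42)
	taskLogger.Log("event", "starting task")
	taskLogger.Log("event", "task complete")
}
```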

91
vendor/github.com/go-kit/log/json_logger.go generated vendored Normal file
View file

@ -0,0 +1,91 @@
package log
import (
"encoding"
"encoding/json"
"fmt"
"io"
"reflect"
)
type jsonLogger struct {
io.Writer
}
// NewJSONLogger returns a Logger that encodes keyvals to the Writer as a
// single JSON object. Each log event produces no more than one call to
// w.Write. The passed Writer must be safe for concurrent use by multiple
// goroutines if the returned Logger will be used concurrently.
func NewJSONLogger(w io.Writer) Logger {
return &jsonLogger{w}
}
func (l *jsonLogger) Log(keyvals ...interface{}) error {
n := (len(keyvals) + 1) / 2 // +1 to handle case when len is odd
m := make(map[string]interface{}, n)
for i := 0; i < len(keyvals); i += 2 {
k := keyvals[i]
var v interface{} = ErrMissingValue
if i+1 < len(keyvals) {
v = keyvals[i+1]
}
merge(m, k, v)
}
enc := json.NewEncoder(l.Writer)
enc.SetEscapeHTML(false)
return enc.Encode(m)
}
func merge(dst map[string]interface{}, k, v interface{}) {
var key string
switch x := k.(type) {
case string:
key = x
case fmt.Stringer:
key = safeString(x)
default:
key = fmt.Sprint(x)
}
// We want json.Marshaler and encoding.TextMarshaler to take priority over
// err.Error() and v.String(). But json.Marshal (called later) does that by
// default, so we force a no-op if it's one of those two cases.
switch x := v.(type) {
case json.Marshaler:
case encoding.TextMarshaler:
case error:
v = safeError(x)
case fmt.Stringer:
v = safeString(x)
}
dst[key] = v
}
func safeString(str fmt.Stringer) (s string) {
defer func() {
if panicVal := recover(); panicVal != nil {
if v := reflect.ValueOf(str); v.Kind() == reflect.Ptr && v.IsNil() {
s = "NULL"
} else {
s = fmt.Sprintf("PANIC in String method: %v", panicVal)
}
}
}()
s = str.String()
return
}
func safeError(err error) (s interface{}) {
defer func() {
if panicVal := recover(); panicVal != nil {
if v := reflect.ValueOf(err); v.Kind() == reflect.Ptr && v.IsNil() {
s = nil
} else {
s = fmt.Sprintf("PANIC in Error method: %v", panicVal)
}
}
}()
s = err.Error()
return
}
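
A short usage sketch for the JSON logger above; the field names are illustrative. Note how an odd-length keyvals slice is padded with ErrMissingValue rather than rejected:

```go
// Sketch of NewJSONLogger usage; field names are illustrative.
package main

import (
	"os"

	"github.com/go-kit/log"
)

func main() {
	logger := log.NewJSONLogger(os.Stdout)

	// One JSON object per event, keys sorted by the encoder:
	// {"component":"api","event":"listening","port":8080}
	logger.Log("component", "api", "event", "listening", "port", 8080)

	// An odd number of keyvals is tolerated; the missing value becomes
	// "(MISSING)": {"orphan":"(MISSING)"}
	logger.Log("orphan")
}
```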

33
vendor/github.com/go-kit/log/level/doc.go generated vendored Normal file
View file

@ -0,0 +1,33 @@
// Package level implements leveled logging on top of Go kit's log package. To
// use the level package, create a logger as per normal in your func main, and
// wrap it with level.NewFilter.
//
// var logger log.Logger
// logger = log.NewLogfmtLogger(os.Stderr)
// logger = level.NewFilter(logger, level.AllowInfo()) // <--
// logger = log.With(logger, "ts", log.DefaultTimestampUTC)
//
// It's also possible to configure log level from a string. For instance from
// a flag, environment variable or configuration file.
//
// fs := flag.NewFlagSet("myprogram", flag.ExitOnError)
// lvl := fs.String("log", "info", "debug, info, warn, error")
//
// var logger log.Logger
// logger = log.NewLogfmtLogger(os.Stderr)
// logger = level.NewFilter(logger, level.Allow(level.ParseDefault(*lvl, level.InfoValue()))) // <--
// logger = log.With(logger, "ts", log.DefaultTimestampUTC)
//
// Then, at the callsites, use one of the level.Debug, Info, Warn, or Error
// helper methods to emit leveled log events.
//
// logger.Log("foo", "bar") // as normal, no level
// level.Debug(logger).Log("request_id", reqID, "trace_data", trace.Get())
// if value > 100 {
// level.Error(logger).Log("value", value)
// }
//
// NewFilter allows precise control over what happens when a log event is
// emitted without a level key, or if a squelched level is used. Check the
// Option functions for details.
package level

256
vendor/github.com/go-kit/log/level/level.go generated vendored Normal file
View file

@ -0,0 +1,256 @@
package level
import (
"errors"
"strings"
"github.com/go-kit/log"
)
// ErrInvalidLevelString is returned whenever an invalid string is passed to Parse.
var ErrInvalidLevelString = errors.New("invalid level string")
// Error returns a logger that includes a Key/ErrorValue pair.
func Error(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), ErrorValue())
}
// Warn returns a logger that includes a Key/WarnValue pair.
func Warn(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), WarnValue())
}
// Info returns a logger that includes a Key/InfoValue pair.
func Info(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), InfoValue())
}
// Debug returns a logger that includes a Key/DebugValue pair.
func Debug(logger log.Logger) log.Logger {
return log.WithPrefix(logger, Key(), DebugValue())
}
// NewFilter wraps next and implements level filtering. See the commentary on
// the Option functions for a detailed description of how to configure levels.
// If no options are provided, all leveled log events created with Debug,
// Info, Warn or Error helper methods are squelched and non-leveled log
// events are passed to next unmodified.
func NewFilter(next log.Logger, options ...Option) log.Logger {
l := &logger{
next: next,
}
for _, option := range options {
option(l)
}
return l
}
type logger struct {
next log.Logger
allowed level
squelchNoLevel bool
errNotAllowed error
errNoLevel error
}
func (l *logger) Log(keyvals ...interface{}) error {
var hasLevel, levelAllowed bool
for i := 1; i < len(keyvals); i += 2 {
if v, ok := keyvals[i].(*levelValue); ok {
hasLevel = true
levelAllowed = l.allowed&v.level != 0
break
}
}
if !hasLevel && l.squelchNoLevel {
return l.errNoLevel
}
if hasLevel && !levelAllowed {
return l.errNotAllowed
}
return l.next.Log(keyvals...)
}
// Option sets a parameter for the leveled logger.
type Option func(*logger)
// Allow the provided log level to pass.
func Allow(v Value) Option {
switch v {
case debugValue:
return AllowDebug()
case infoValue:
return AllowInfo()
case warnValue:
return AllowWarn()
case errorValue:
return AllowError()
default:
return AllowNone()
}
}
// AllowAll is an alias for AllowDebug.
func AllowAll() Option {
return AllowDebug()
}
// AllowDebug allows error, warn, info and debug level log events to pass.
func AllowDebug() Option {
return allowed(levelError | levelWarn | levelInfo | levelDebug)
}
// AllowInfo allows error, warn and info level log events to pass.
func AllowInfo() Option {
return allowed(levelError | levelWarn | levelInfo)
}
// AllowWarn allows error and warn level log events to pass.
func AllowWarn() Option {
return allowed(levelError | levelWarn)
}
// AllowError allows only error level log events to pass.
func AllowError() Option {
return allowed(levelError)
}
// AllowNone allows no leveled log events to pass.
func AllowNone() Option {
return allowed(0)
}
func allowed(allowed level) Option {
return func(l *logger) { l.allowed = allowed }
}
// Parse a string to its corresponding level value. Valid strings are "debug",
// "info", "warn", and "error". Strings are normalized via strings.TrimSpace and
// strings.ToLower.
func Parse(level string) (Value, error) {
switch strings.TrimSpace(strings.ToLower(level)) {
case debugValue.name:
return debugValue, nil
case infoValue.name:
return infoValue, nil
case warnValue.name:
return warnValue, nil
case errorValue.name:
return errorValue, nil
default:
return nil, ErrInvalidLevelString
}
}
// ParseDefault calls Parse and returns the default Value on error.
func ParseDefault(level string, def Value) Value {
v, err := Parse(level)
if err != nil {
return def
}
return v
}
// ErrNotAllowed sets the error to return from Log when it squelches a log
// event disallowed by the configured Allow[Level] option. By default,
// ErrNotAllowed is nil; in this case the log event is squelched with no
// error.
func ErrNotAllowed(err error) Option {
return func(l *logger) { l.errNotAllowed = err }
}
// SquelchNoLevel instructs Log to squelch log events with no level, so that
// they don't proceed through to the wrapped logger. If SquelchNoLevel is set
// to true and a log event is squelched in this way, the error value
// configured with ErrNoLevel is returned to the caller.
func SquelchNoLevel(squelch bool) Option {
return func(l *logger) { l.squelchNoLevel = squelch }
}
// ErrNoLevel sets the error to return from Log when it squelches a log event
// with no level. By default, ErrNoLevel is nil; in this case the log event is
// squelched with no error.
func ErrNoLevel(err error) Option {
return func(l *logger) { l.errNoLevel = err }
}
// NewInjector wraps next and returns a logger that adds a Key/level pair to
// the beginning of log events that don't already contain a level. In effect,
// this gives a default level to logs without a level.
func NewInjector(next log.Logger, level Value) log.Logger {
return &injector{
next: next,
level: level,
}
}
type injector struct {
next log.Logger
level interface{}
}
func (l *injector) Log(keyvals ...interface{}) error {
for i := 1; i < len(keyvals); i += 2 {
if _, ok := keyvals[i].(*levelValue); ok {
return l.next.Log(keyvals...)
}
}
kvs := make([]interface{}, len(keyvals)+2)
kvs[0], kvs[1] = key, l.level
copy(kvs[2:], keyvals)
return l.next.Log(kvs...)
}
// Value is the interface that each of the canonical level values implement.
// It contains unexported methods that prevent types from other packages from
// implementing it, guaranteeing that NewFilter can distinguish the levels
// defined in this package from all other values.
type Value interface {
String() string
levelVal()
}
// Key returns the unique key added to log events by the loggers in this
// package.
func Key() interface{} { return key }
// ErrorValue returns the unique value added to log events by Error.
func ErrorValue() Value { return errorValue }
// WarnValue returns the unique value added to log events by Warn.
func WarnValue() Value { return warnValue }
// InfoValue returns the unique value added to log events by Info.
func InfoValue() Value { return infoValue }
// DebugValue returns the unique value added to log events by Debug.
func DebugValue() Value { return debugValue }
var (
// key is of type interface{} so that it allocates once during package
// initialization and avoids allocating every time the value is added to a
// []interface{} later.
key interface{} = "level"
errorValue = &levelValue{level: levelError, name: "error"}
warnValue = &levelValue{level: levelWarn, name: "warn"}
infoValue = &levelValue{level: levelInfo, name: "info"}
debugValue = &levelValue{level: levelDebug, name: "debug"}
)
type level byte
const (
levelDebug level = 1 << iota
levelInfo
levelWarn
levelError
)
type levelValue struct {
name string
level
}
func (v *levelValue) String() string { return v.name }
func (v *levelValue) levelVal() {}
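
A sketch tying the Option functions and NewInjector together; the environment variable name and error value are illustrative assumptions:

```go
// Sketch of level filtering configured from a string, with squelch behavior
// and a default-level injector; LOG_LEVEL and the error value are assumptions.
package main

import (
	"errors"
	"os"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

func main() {
	base := log.NewLogfmtLogger(os.Stderr)

	// Allowed level read from the environment, defaulting to info on bad input.
	lvl := level.ParseDefault(os.Getenv("LOG_LEVEL"), level.InfoValue())

	logger := level.NewFilter(base,
		level.Allow(lvl),
		level.SquelchNoLevel(true),               // drop events without a level key
		level.ErrNoLevel(errors.New("no level")), // and report that to the caller
	)

	level.Info(logger).Log("msg", "service started") // passes with AllowInfo or lower
	level.Debug(logger).Log("msg", "verbose detail") // squelched unless debug is allowed
	err := logger.Log("msg", "unleveled event")      // squelched; err is the ErrNoLevel value
	_ = err

	// Alternatively, give unleveled events a default level instead of dropping them.
	injected := level.NewInjector(base, level.InfoValue())
	injected.Log("msg", "now carries level=info")
}
```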

179
vendor/github.com/go-kit/log/log.go generated vendored Normal file
View file

@ -0,0 +1,179 @@
package log
import "errors"
// Logger is the fundamental interface for all log operations. Log creates a
// log event from keyvals, a variadic sequence of alternating keys and values.
// Implementations must be safe for concurrent use by multiple goroutines. In
// particular, any implementation of Logger that appends to keyvals or
// modifies or retains any of its elements must make a copy first.
type Logger interface {
Log(keyvals ...interface{}) error
}
// ErrMissingValue is appended to keyvals slices with odd length to substitute
// the missing value.
var ErrMissingValue = errors.New("(MISSING)")
// With returns a new contextual logger with keyvals prepended to those passed
// to calls to Log. If logger is also a contextual logger created by With,
// WithPrefix, or WithSuffix, keyvals is appended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func With(logger Logger, keyvals ...interface{}) Logger {
if len(keyvals) == 0 {
return logger
}
l := newContext(logger)
kvs := append(l.keyvals, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
return &context{
logger: l.logger,
// Limiting the capacity of the stored keyvals ensures that a new
// backing array is created if the slice must grow in Log or With.
// Using the extra capacity without copying risks a data race that
// would violate the Logger interface contract.
keyvals: kvs[:len(kvs):len(kvs)],
hasValuer: l.hasValuer || containsValuer(keyvals),
sKeyvals: l.sKeyvals,
sHasValuer: l.sHasValuer,
}
}
// WithPrefix returns a new contextual logger with keyvals prepended to those
// passed to calls to Log. If logger is also a contextual logger created by
// With, WithPrefix, or WithSuffix, keyvals is prepended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func WithPrefix(logger Logger, keyvals ...interface{}) Logger {
if len(keyvals) == 0 {
return logger
}
l := newContext(logger)
// Limiting the capacity of the stored keyvals ensures that a new
// backing array is created if the slice must grow in Log or With.
// Using the extra capacity without copying risks a data race that
// would violate the Logger interface contract.
n := len(l.keyvals) + len(keyvals)
if len(keyvals)%2 != 0 {
n++
}
kvs := make([]interface{}, 0, n)
kvs = append(kvs, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
kvs = append(kvs, l.keyvals...)
return &context{
logger: l.logger,
keyvals: kvs,
hasValuer: l.hasValuer || containsValuer(keyvals),
sKeyvals: l.sKeyvals,
sHasValuer: l.sHasValuer,
}
}
// WithSuffix returns a new contextual logger with keyvals appended to those
// passed to calls to Log. If logger is also a contextual logger created by
// With, WithPrefix, or WithSuffix, keyvals is appended to the existing context.
//
// The returned Logger replaces all value elements (odd indexes) containing a
// Valuer with their generated value for each call to its Log method.
func WithSuffix(logger Logger, keyvals ...interface{}) Logger {
if len(keyvals) == 0 {
return logger
}
l := newContext(logger)
// Limiting the capacity of the stored keyvals ensures that a new
// backing array is created if the slice must grow in Log or With.
// Using the extra capacity without copying risks a data race that
// would violate the Logger interface contract.
n := len(l.sKeyvals) + len(keyvals)
if len(keyvals)%2 != 0 {
n++
}
kvs := make([]interface{}, 0, n)
kvs = append(kvs, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
kvs = append(l.sKeyvals, kvs...)
return &context{
logger: l.logger,
keyvals: l.keyvals,
hasValuer: l.hasValuer,
sKeyvals: kvs,
sHasValuer: l.sHasValuer || containsValuer(keyvals),
}
}
// context is the Logger implementation returned by With, WithPrefix, and
// WithSuffix. It wraps a Logger and holds keyvals that it includes in all
// log events. Its Log method calls bindValues to generate values for each
// Valuer in the context keyvals.
//
// A context must always have the same number of stack frames between calls to
// its Log method and the eventual binding of Valuers to their value. This
// requirement comes from the functional requirement to allow a context to
// resolve application call site information for a Caller stored in the
// context. To do this we must be able to predict the number of logging
// functions on the stack when bindValues is called.
//
// Two implementation details provide the needed stack depth consistency.
//
// 1. newContext avoids introducing an additional layer when asked to
// wrap another context.
// 2. With, WithPrefix, and WithSuffix avoid introducing an additional
// layer by returning a newly constructed context with a merged keyvals
// rather than simply wrapping the existing context.
type context struct {
logger Logger
keyvals []interface{}
sKeyvals []interface{} // suffixes
hasValuer bool
sHasValuer bool
}
func newContext(logger Logger) *context {
if c, ok := logger.(*context); ok {
return c
}
return &context{logger: logger}
}
// Log replaces all value elements (odd indexes) containing a Valuer in the
// stored context with their generated value, appends keyvals, and passes the
// result to the wrapped Logger.
func (l *context) Log(keyvals ...interface{}) error {
kvs := append(l.keyvals, keyvals...)
if len(kvs)%2 != 0 {
kvs = append(kvs, ErrMissingValue)
}
if l.hasValuer {
// If no keyvals were appended above then we must copy l.keyvals so
// that future log events will reevaluate the stored Valuers.
if len(keyvals) == 0 {
kvs = append([]interface{}{}, l.keyvals...)
}
bindValues(kvs[:(len(l.keyvals))])
}
kvs = append(kvs, l.sKeyvals...)
if l.sHasValuer {
bindValues(kvs[len(kvs)-len(l.sKeyvals):])
}
return l.logger.Log(kvs...)
}
// LoggerFunc is an adapter to allow use of ordinary functions as Loggers. If
// f is a function with the appropriate signature, LoggerFunc(f) is a Logger
// object that calls f.
type LoggerFunc func(...interface{}) error
// Log implements Logger by calling f(keyvals...).
func (f LoggerFunc) Log(keyvals ...interface{}) error {
return f(keyvals...)
}
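
The ordering rules of With, WithPrefix, and WithSuffix are easiest to see side by side; a small sketch with illustrative keys:

```go
// Sketch of how With, WithPrefix, and WithSuffix order context keyvals.
package main

import (
	"os"

	"github.com/go-kit/log"
)

func main() {
	base := log.NewLogfmtLogger(os.Stdout)

	l := log.With(base, "a", 1)   // context: a=1
	l = log.WithPrefix(l, "b", 2) // prepended: b=2 a=1
	l = log.WithSuffix(l, "c", 3) // appended after the event's own keyvals

	// Emits (logfmt): b=2 a=1 msg=hello c=3
	l.Log("msg", "hello")
}
```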

62
vendor/github.com/go-kit/log/logfmt_logger.go generated vendored Normal file
View file

@ -0,0 +1,62 @@
package log
import (
"bytes"
"io"
"sync"
"github.com/go-logfmt/logfmt"
)
type logfmtEncoder struct {
*logfmt.Encoder
buf bytes.Buffer
}
func (l *logfmtEncoder) Reset() {
l.Encoder.Reset()
l.buf.Reset()
}
var logfmtEncoderPool = sync.Pool{
New: func() interface{} {
var enc logfmtEncoder
enc.Encoder = logfmt.NewEncoder(&enc.buf)
return &enc
},
}
type logfmtLogger struct {
w io.Writer
}
// NewLogfmtLogger returns a logger that encodes keyvals to the Writer in
// logfmt format. Each log event produces no more than one call to w.Write.
// The passed Writer must be safe for concurrent use by multiple goroutines if
// the returned Logger will be used concurrently.
func NewLogfmtLogger(w io.Writer) Logger {
return &logfmtLogger{w}
}
func (l logfmtLogger) Log(keyvals ...interface{}) error {
enc := logfmtEncoderPool.Get().(*logfmtEncoder)
enc.Reset()
defer logfmtEncoderPool.Put(enc)
if err := enc.EncodeKeyvals(keyvals...); err != nil {
return err
}
// Add newline to the end of the buffer
if err := enc.EndRecord(); err != nil {
return err
}
// The Logger interface requires implementations to be safe for concurrent
// use by multiple goroutines. For this implementation that means making
// only one call to l.w.Write() for each call to Log.
if _, err := l.w.Write(enc.buf.Bytes()); err != nil {
return err
}
return nil
}

8
vendor/github.com/go-kit/log/nop_logger.go generated vendored Normal file
View file

@ -0,0 +1,8 @@
package log
type nopLogger struct{}
// NewNopLogger returns a logger that doesn't do anything.
func NewNopLogger() Logger { return nopLogger{} }
func (nopLogger) Log(...interface{}) error { return nil }

1
vendor/github.com/go-kit/log/staticcheck.conf generated vendored Normal file
View file

@ -0,0 +1 @@
checks = ["all"]

151
vendor/github.com/go-kit/log/stdlib.go generated vendored Normal file
View file

@ -0,0 +1,151 @@
package log
import (
"bytes"
"io"
"log"
"regexp"
"strings"
)
// StdlibWriter implements io.Writer by invoking the stdlib log.Print. It's
// designed to be passed to a Go kit logger as the writer, for cases where
// it's necessary to redirect all Go kit log output to the stdlib logger.
//
// If you have any choice in the matter, you shouldn't use this. Prefer to
// redirect the stdlib log to the Go kit logger via NewStdlibAdapter.
type StdlibWriter struct{}
// Write implements io.Writer.
func (w StdlibWriter) Write(p []byte) (int, error) {
log.Print(strings.TrimSpace(string(p)))
return len(p), nil
}
// StdlibAdapter wraps a Logger and allows it to be passed to the stdlib
// logger's SetOutput. It will extract date/timestamps, filenames, and
// messages, and place them under relevant keys.
type StdlibAdapter struct {
Logger
timestampKey string
fileKey string
messageKey string
prefix string
joinPrefixToMsg bool
}
// StdlibAdapterOption sets a parameter for the StdlibAdapter.
type StdlibAdapterOption func(*StdlibAdapter)
// TimestampKey sets the key for the timestamp field. By default, it's "ts".
func TimestampKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.timestampKey = key }
}
// FileKey sets the key for the file and line field. By default, it's "caller".
func FileKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.fileKey = key }
}
// MessageKey sets the key for the actual log message. By default, it's "msg".
func MessageKey(key string) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.messageKey = key }
}
// Prefix configures the adapter to parse a prefix from stdlib log events. If
// you provide a non-empty prefix to the stdlib logger, then you should provide
// that same prefix to the adapter via this option.
//
// By default, the prefix isn't included in the msg key. Set joinPrefixToMsg to
// true if you want to include the parsed prefix in the msg.
func Prefix(prefix string, joinPrefixToMsg bool) StdlibAdapterOption {
return func(a *StdlibAdapter) { a.prefix = prefix; a.joinPrefixToMsg = joinPrefixToMsg }
}
// NewStdlibAdapter returns a new StdlibAdapter wrapper around the passed
// logger. It's designed to be passed to log.SetOutput.
func NewStdlibAdapter(logger Logger, options ...StdlibAdapterOption) io.Writer {
a := StdlibAdapter{
Logger: logger,
timestampKey: "ts",
fileKey: "caller",
messageKey: "msg",
}
for _, option := range options {
option(&a)
}
return a
}
func (a StdlibAdapter) Write(p []byte) (int, error) {
p = a.handlePrefix(p)
result := subexps(p)
keyvals := []interface{}{}
var timestamp string
if date, ok := result["date"]; ok && date != "" {
timestamp = date
}
if time, ok := result["time"]; ok && time != "" {
if timestamp != "" {
timestamp += " "
}
timestamp += time
}
if timestamp != "" {
keyvals = append(keyvals, a.timestampKey, timestamp)
}
if file, ok := result["file"]; ok && file != "" {
keyvals = append(keyvals, a.fileKey, file)
}
if msg, ok := result["msg"]; ok {
msg = a.handleMessagePrefix(msg)
keyvals = append(keyvals, a.messageKey, msg)
}
if err := a.Logger.Log(keyvals...); err != nil {
return 0, err
}
return len(p), nil
}
func (a StdlibAdapter) handlePrefix(p []byte) []byte {
if a.prefix != "" {
p = bytes.TrimPrefix(p, []byte(a.prefix))
}
return p
}
func (a StdlibAdapter) handleMessagePrefix(msg string) string {
if a.prefix == "" {
return msg
}
msg = strings.TrimPrefix(msg, a.prefix)
if a.joinPrefixToMsg {
msg = a.prefix + msg
}
return msg
}
const (
logRegexpDate = `(?P<date>[0-9]{4}/[0-9]{2}/[0-9]{2})?[ ]?`
logRegexpTime = `(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?)?[ ]?`
logRegexpFile = `(?P<file>.+?:[0-9]+)?`
logRegexpMsg = `(: )?(?P<msg>(?s:.*))`
)
var (
logRegexp = regexp.MustCompile(logRegexpDate + logRegexpTime + logRegexpFile + logRegexpMsg)
)
func subexps(line []byte) map[string]string {
m := logRegexp.FindSubmatch(line)
if len(m) < len(logRegexp.SubexpNames()) {
return map[string]string{}
}
result := map[string]string{}
for i, name := range logRegexp.SubexpNames() {
result[name] = strings.TrimRight(string(m[i]), "\n")
}
return result
}
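
A sketch of the intended direction, redirecting the stdlib logger into a go-kit logger with NewStdlibAdapter; the flags shown are illustrative:

```go
// Sketch of routing stdlib log output through a go-kit logger.
package main

import (
	stdlog "log"
	"os"

	"github.com/go-kit/log"
)

func main() {
	logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))

	// Everything written by the stdlib logger is parsed into ts/caller/msg keys.
	stdlog.SetOutput(log.NewStdlibAdapter(logger))
	stdlog.SetFlags(stdlog.LstdFlags | stdlog.Lshortfile)

	stdlog.Print("hello from the stdlib logger")
	// Produces roughly:
	// ts="2023/11/06 15:04:05" caller=main.go:18 msg="hello from the stdlib logger"
}
```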

113
vendor/github.com/go-kit/log/sync.go generated vendored Normal file
View file

@ -0,0 +1,113 @@
package log
import (
"io"
"sync"
"sync/atomic"
)
// SwapLogger wraps another logger that may be safely replaced while other
// goroutines use the SwapLogger concurrently. The zero value for a SwapLogger
// will discard all log events without error.
//
// SwapLogger serves well as a package global logger that can be changed by
// importers.
type SwapLogger struct {
logger atomic.Value
}
type loggerStruct struct {
Logger
}
// Log implements the Logger interface by forwarding keyvals to the currently
// wrapped logger. It does not log anything if the wrapped logger is nil.
func (l *SwapLogger) Log(keyvals ...interface{}) error {
s, ok := l.logger.Load().(loggerStruct)
if !ok || s.Logger == nil {
return nil
}
return s.Log(keyvals...)
}
// Swap replaces the currently wrapped logger with logger. Swap may be called
// concurrently with calls to Log from other goroutines.
func (l *SwapLogger) Swap(logger Logger) {
l.logger.Store(loggerStruct{logger})
}
// NewSyncWriter returns a new writer that is safe for concurrent use by
// multiple goroutines. Writes to the returned writer are passed on to w. If
// another write is already in progress, the calling goroutine blocks until
// the writer is available.
//
// If w implements the following interface, so does the returned writer.
//
// interface {
// Fd() uintptr
// }
func NewSyncWriter(w io.Writer) io.Writer {
switch w := w.(type) {
case fdWriter:
return &fdSyncWriter{fdWriter: w}
default:
return &syncWriter{Writer: w}
}
}
// syncWriter synchronizes concurrent writes to an io.Writer.
type syncWriter struct {
sync.Mutex
io.Writer
}
// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the syncWriter is available.
func (w *syncWriter) Write(p []byte) (n int, err error) {
w.Lock()
defer w.Unlock()
return w.Writer.Write(p)
}
// fdWriter is an io.Writer that also has an Fd method. The most common
// example of an fdWriter is an *os.File.
type fdWriter interface {
io.Writer
Fd() uintptr
}
// fdSyncWriter synchronizes concurrent writes to an fdWriter.
type fdSyncWriter struct {
sync.Mutex
fdWriter
}
// Write writes p to the underlying io.Writer. If another write is already in
// progress, the calling goroutine blocks until the fdSyncWriter is available.
func (w *fdSyncWriter) Write(p []byte) (n int, err error) {
w.Lock()
defer w.Unlock()
return w.fdWriter.Write(p)
}
// syncLogger provides concurrent safe logging for another Logger.
type syncLogger struct {
mu sync.Mutex
logger Logger
}
// NewSyncLogger returns a logger that synchronizes concurrent use of the
// wrapped logger. When multiple goroutines use the SyncLogger concurrently
// only one goroutine will be allowed to log to the wrapped logger at a time.
// The other goroutines will block until the logger is available.
func NewSyncLogger(logger Logger) Logger {
return &syncLogger{logger: logger}
}
// Log logs keyvals to the underlying Logger. If another log is already in
// progress, the calling goroutine blocks until the syncLogger is available.
func (l *syncLogger) Log(keyvals ...interface{}) error {
l.mu.Lock()
defer l.mu.Unlock()
return l.logger.Log(keyvals...)
}
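
A sketch of SwapLogger as a replaceable package-level logger, plus NewSyncLogger for wrapped loggers that may write more than once per event; the names are illustrative:

```go
// Sketch of SwapLogger and NewSyncLogger; names are illustrative.
package main

import (
	"os"

	"github.com/go-kit/log"
)

// Logger starts as the zero SwapLogger, which silently discards events.
var Logger log.SwapLogger

func main() {
	// Nothing is written yet.
	Logger.Log("msg", "dropped")

	// Swap in a real logger; Swap is safe while other goroutines call Log.
	Logger.Swap(log.NewLogfmtLogger(os.Stderr))
	Logger.Log("msg", "now visible")

	// NewSyncLogger serializes whole Log calls, which is the right tool when
	// the wrapped logger may perform multiple writes per event.
	sl := log.NewSyncLogger(log.NewLogfmtLogger(os.Stdout))
	sl.Log("msg", "atomic event")
}
```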

110
vendor/github.com/go-kit/log/value.go generated vendored Normal file
View file

@ -0,0 +1,110 @@
package log
import (
"runtime"
"strconv"
"strings"
"time"
)
// A Valuer generates a log value. When passed to With, WithPrefix, or
// WithSuffix in a value element (odd indexes), it represents a dynamic
// value which is re-evaluated with each log event.
type Valuer func() interface{}
// bindValues replaces all value elements (odd indexes) containing a Valuer
// with their generated value.
func bindValues(keyvals []interface{}) {
for i := 1; i < len(keyvals); i += 2 {
if v, ok := keyvals[i].(Valuer); ok {
keyvals[i] = v()
}
}
}
// containsValuer returns true if any of the value elements (odd indexes)
// contain a Valuer.
func containsValuer(keyvals []interface{}) bool {
for i := 1; i < len(keyvals); i += 2 {
if _, ok := keyvals[i].(Valuer); ok {
return true
}
}
return false
}
// Timestamp returns a timestamp Valuer. It invokes the t function to get the
// time; unless you are doing something tricky, pass time.Now.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func Timestamp(t func() time.Time) Valuer {
return func() interface{} { return t() }
}
// TimestampFormat returns a timestamp Valuer with a custom time format. It
// invokes the t function to get the time to format; unless you are doing
// something tricky, pass time.Now. The layout string is passed to
// Time.Format.
//
// Most users will want to use DefaultTimestamp or DefaultTimestampUTC, which
// are TimestampFormats that use the RFC3339Nano format.
func TimestampFormat(t func() time.Time, layout string) Valuer {
return func() interface{} {
return timeFormat{
time: t(),
layout: layout,
}
}
}
// A timeFormat represents an instant in time and a layout used when
// marshaling to a text format.
type timeFormat struct {
time time.Time
layout string
}
func (tf timeFormat) String() string {
return tf.time.Format(tf.layout)
}
// MarshalText implements encoding.TextMarshaler.
func (tf timeFormat) MarshalText() (text []byte, err error) {
// The following code adapted from the standard library time.Time.Format
// method. Using the same undocumented magic constant to extend the size
// of the buffer as seen there.
b := make([]byte, 0, len(tf.layout)+10)
b = tf.time.AppendFormat(b, tf.layout)
return b, nil
}
// Caller returns a Valuer that returns a file and line from a specified depth
// in the callstack. Users will probably want to use DefaultCaller.
func Caller(depth int) Valuer {
return func() interface{} {
_, file, line, _ := runtime.Caller(depth)
idx := strings.LastIndexByte(file, '/')
// using idx+1 below handles both of following cases:
// idx == -1 because no "/" was found, or
// idx >= 0 and we want to start at the character after the found "/".
return file[idx+1:] + ":" + strconv.Itoa(line)
}
}
var (
// DefaultTimestamp is a Valuer that returns the current wallclock time,
// respecting time zones, when bound.
DefaultTimestamp = TimestampFormat(time.Now, time.RFC3339Nano)
// DefaultTimestampUTC is a Valuer that returns the current time in UTC
// when bound.
DefaultTimestampUTC = TimestampFormat(
func() time.Time { return time.Now().UTC() },
time.RFC3339Nano,
)
// DefaultCaller is a Valuer that returns the file and line where the Log
// method was invoked. It can only be used with log.With.
DefaultCaller = Caller(3)
)
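
A sketch of binding Valuers so that timestamps and caller information are re-evaluated on every event; the layout chosen here is an illustrative alternative to the RFC3339Nano defaults:

```go
// Sketch of dynamic values bound via log.With; layout choice is illustrative.
package main

import (
	"os"
	"time"

	"github.com/go-kit/log"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stdout)

	// ts is re-evaluated on every Log call because it is bound as a Valuer.
	logger = log.With(logger,
		"ts", log.TimestampFormat(time.Now, time.RFC1123),
		"caller", log.DefaultCaller, // file:line of the Log call site
	)

	logger.Log("msg", "first")
	time.Sleep(10 * time.Millisecond)
	logger.Log("msg", "second") // carries a later ts than the first event
}
```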

1
vendor/github.com/go-logfmt/logfmt/.gitignore generated vendored Normal file
View file

@ -0,0 +1 @@
.vscode/

48
vendor/github.com/go-logfmt/logfmt/CHANGELOG.md generated vendored Normal file
View file

@ -0,0 +1,48 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.5.0] - 2020-01-03
### Changed
- Remove the dependency on github.com/kr/logfmt by [@ChrisHines]
- Move fuzz code to github.com/go-logfmt/fuzzlogfmt by [@ChrisHines]
## [0.4.0] - 2018-11-21
### Added
- Go module support by [@ChrisHines]
- CHANGELOG by [@ChrisHines]
### Changed
- Drop invalid runes from keys instead of returning ErrInvalidKey by [@ChrisHines]
- On panic while printing, attempt to print panic value by [@bboreham]
## [0.3.0] - 2016-11-15
### Added
- Pool buffers for quoted strings and byte slices by [@nussjustin]
### Fixed
- Fuzz fix, quote invalid UTF-8 values by [@judwhite]
## [0.2.0] - 2016-05-08
### Added
- Encoder.EncodeKeyvals by [@ChrisHines]
## [0.1.0] - 2016-03-28
### Added
- Encoder by [@ChrisHines]
- Decoder by [@ChrisHines]
- MarshalKeyvals by [@ChrisHines]
[0.5.0]: https://github.com/go-logfmt/logfmt/compare/v0.4.0...v0.5.0
[0.4.0]: https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0
[0.3.0]: https://github.com/go-logfmt/logfmt/compare/v0.2.0...v0.3.0
[0.2.0]: https://github.com/go-logfmt/logfmt/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/go-logfmt/logfmt/commits/v0.1.0
[@ChrisHines]: https://github.com/ChrisHines
[@bboreham]: https://github.com/bboreham
[@judwhite]: https://github.com/judwhite
[@nussjustin]: https://github.com/nussjustin

Some files were not shown because too many files have changed in this diff.