Compare commits

...

10 Commits

Author SHA1 Message Date
Kubernetes Prow Robot
3dceb4a885
Merge pull request #925 from kubernetes-csi/dependabot/go_modules/golang.org/x/net-0.41.0
Some checks failed
Windows Tests / build (^1.16, windows-latest) (push) Has been cancelled
Trivy vulnerability scanner / Build (push) Has been cancelled
Static Checks / Go Lint (push) Has been cancelled
Static Checks / Verify Helm (push) Has been cancelled
CodeQL / Analyze (go) (push) Failing after 16s
ShellCheck / Shellcheck (push) Has been cancelled
k8s api version check / Build (push) Has been cancelled
Darwin / Unit Tests (push) Has been cancelled
codespell / Check for spelling errors (push) Has been cancelled
Linux Unit tests / Build (push) Has been cancelled
chore(deps): bump golang.org/x/net from 0.40.0 to 0.41.0
2025-06-18 03:10:51 -07:00
Kubernetes Prow Robot
81c2640b91
Merge pull request #928 from kubernetes-csi/dependabot/docker/build-image/debian-base-bookworm-v1.0.5
chore(deps): bump build-image/debian-base from bookworm-v1.0.4 to bookworm-v1.0.5
2025-06-17 19:56:51 -07:00
dependabot[bot]
8535576a62
chore(deps): bump build-image/debian-base
Bumps build-image/debian-base from bookworm-v1.0.4 to bookworm-v1.0.5.

---
updated-dependencies:
- dependency-name: build-image/debian-base
  dependency-version: bookworm-v1.0.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-17 17:21:58 +00:00
Andy Zhang
e973ab1c85
Merge pull request #926 from andyzhangx/CVE-2025-4673
test: fix CVE-2025-4673 in trivy action
2025-06-14 05:37:04 +03:00
andyzhangx
edf6441990
test: fix CVE-2025-4673 in trivy action
2025-06-14 01:53:34 +00:00
dependabot[bot]
3937101a67
chore(deps): bump golang.org/x/net from 0.40.0 to 0.41.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.40.0 to 0.41.0.
- [Commits](https://github.com/golang/net/compare/v0.40.0...v0.41.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-09 22:06:52 +00:00
Kubernetes Prow Robot
c243947a33
Merge pull request #924 from kubernetes-csi/dependabot/go_modules/golang.org/x/mod-0.25.0
chore(deps): bump golang.org/x/mod from 0.24.0 to 0.25.0
2025-06-06 19:48:38 -07:00
dependabot[bot]
a1a247dbd2
chore(deps): bump golang.org/x/mod from 0.24.0 to 0.25.0
Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.24.0 to 0.25.0.
- [Commits](https://github.com/golang/mod/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/mod
  dependency-version: 0.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-06 22:47:02 +00:00
Kubernetes Prow Robot
3e009a14a6
Merge pull request #922 from kubernetes-csi/dependabot/go_modules/google.golang.org/grpc-1.73.0
chore(deps): bump google.golang.org/grpc from 1.72.2 to 1.73.0
2025-06-05 21:36:48 -07:00
dependabot[bot]
5e602c40a2
chore(deps): bump google.golang.org/grpc from 1.72.2 to 1.73.0
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.72.2 to 1.73.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.72.2...v1.73.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.73.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-05 22:38:30 +00:00
94 changed files with 3277 additions and 777 deletions

View File

@@ -15,7 +15,7 @@ jobs:
       - name: Install go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.24.2
+          go-version: 1.24.4
       - name: Build an image from Dockerfile
         run: |

View File

@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-FROM registry.k8s.io/build-image/debian-base:bookworm-v1.0.4
+FROM registry.k8s.io/build-image/debian-base:bookworm-v1.0.5
 ARG ARCH
 ARG binary=./bin/${ARCH}/nfsplugin

28
go.mod
View File

@@ -10,8 +10,8 @@ require (
     github.com/pborman/uuid v1.2.1
     github.com/stretchr/testify v1.10.0
     go.uber.org/goleak v1.3.0
-    golang.org/x/net v0.40.0
-    google.golang.org/grpc v1.72.2
+    golang.org/x/net v0.41.0
+    google.golang.org/grpc v1.73.0
     google.golang.org/protobuf v1.36.6
     k8s.io/api v0.31.6
     k8s.io/apimachinery v0.31.6
@@ -95,28 +95,28 @@ require (
     go.etcd.io/etcd/client/v3 v3.5.14 // indirect
     go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
     go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
-    go.opentelemetry.io/otel v1.34.0 // indirect
+    go.opentelemetry.io/otel v1.35.0 // indirect
     go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
     go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 // indirect
-    go.opentelemetry.io/otel/metric v1.34.0 // indirect
-    go.opentelemetry.io/otel/sdk v1.34.0 // indirect
-    go.opentelemetry.io/otel/trace v1.34.0 // indirect
+    go.opentelemetry.io/otel/metric v1.35.0 // indirect
+    go.opentelemetry.io/otel/sdk v1.35.0 // indirect
+    go.opentelemetry.io/otel/trace v1.35.0 // indirect
     go.opentelemetry.io/proto/otlp v1.3.1 // indirect
     go.uber.org/multierr v1.11.0 // indirect
     go.uber.org/zap v1.26.0 // indirect
-    golang.org/x/crypto v0.38.0 // indirect
+    golang.org/x/crypto v0.39.0 // indirect
     golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
-    golang.org/x/mod v0.24.0
-    golang.org/x/oauth2 v0.26.0 // indirect
-    golang.org/x/sync v0.14.0 // indirect
+    golang.org/x/mod v0.25.0
+    golang.org/x/oauth2 v0.28.0 // indirect
+    golang.org/x/sync v0.15.0 // indirect
     golang.org/x/sys v0.33.0 // indirect
     golang.org/x/term v0.32.0 // indirect
-    golang.org/x/text v0.25.0 // indirect
+    golang.org/x/text v0.26.0 // indirect
     golang.org/x/time v0.3.0 // indirect
-    golang.org/x/tools v0.31.0 // indirect
+    golang.org/x/tools v0.33.0 // indirect
     google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de // indirect
-    google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a // indirect
-    google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
+    google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463 // indirect
+    google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 // indirect
     gopkg.in/inf.v0 v0.9.1 // indirect
     gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
     gopkg.in/yaml.v2 v2.4.0 // indirect

60
go.sum
View File

@@ -389,20 +389,20 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.5
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0/go.mod h1:azvtTADFQJA8mX80jIH/akaE7h+dbm/sVuaHqN13w74=
 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA=
 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg=
-go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
-go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
+go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
+go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 h1:qFffATk0X+HD+f1Z8lswGiOQYKHRlzfmdJm0wEaVrFA=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0/go.mod h1:MOiCmryaYtc+V0Ei+Tx9o5S1ZjA7kzLucuVuyzBZloQ=
-go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
-go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
-go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
-go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
-go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
-go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
-go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
-go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
+go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
+go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
+go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY=
+go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg=
+go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o=
+go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w=
+go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
+go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
 go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
 go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
 go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@@ -423,8 +423,8 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
-golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
+golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
+golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -448,8 +448,8 @@ golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
 golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
 golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
-golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
-golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
+golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
+golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
 golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -468,14 +468,14 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
 golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
 golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
-golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
-golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
+golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
+golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
-golang.org/x/oauth2 v0.26.0 h1:afQXWNNaeC4nvZ0Ed9XvCCzXM6UHJG7iCg0W4fPqSBE=
-golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
+golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -483,8 +483,8 @@ golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJ
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
-golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
+golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -514,8 +514,8 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
-golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
+golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
+golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -544,8 +544,8 @@ golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtn
 golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
 golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
 golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.31.0 h1:0EedkvKDbh+qistFTd0Bcwe/YLh4vHwWEkiI0toFIBU=
-golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ=
+golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
+golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -572,10 +572,10 @@ google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvx
 google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
 google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de h1:F6qOa9AZTYJXOUEr4jDysRDLrm4PHePlge4v4TGAlxY=
 google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de/go.mod h1:VUhTRKeHn9wwcdrk73nvdC9gF178Tzhmt/qyaFcPLSo=
-google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a h1:nwKuGPlUAt+aR+pcrkfFRrTU1BVrSmYyYMxYbUIVHr0=
-google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ=
+google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463 h1:hE3bRWtU6uceqlh4fhrSnUyjKHMKB9KrTLLG+bc0ddM=
+google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463/go.mod h1:U90ffi8eUL9MwPcrJylN5+Mk2v3vuPDptd5yyNUiRR8=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 h1:e0AIkUUhxyBKh6ssZNrAMeqhA7RKUj42346d1y02i2g=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
 google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
 google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -584,8 +584,8 @@ google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQ
 google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
 google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
 google.golang.org/grpc v1.29.0/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
-google.golang.org/grpc v1.72.2 h1:TdbGzwb82ty4OusHWepvFWGLgIbNo1/SUynEN0ssqv8=
-google.golang.org/grpc v1.72.2/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
+google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
+google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
 google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
 google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
 google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=

View File

@@ -1,6 +1,7 @@
 .DS_Store
 Thumbs.db
+.cache/
 .tools/
 venv/
 .idea/

View File

@@ -25,13 +25,13 @@ linters:
   - perfsprint
   - revive
   - staticcheck
-  - tenv
   - testifylint
   - typecheck
   - unconvert
   - unused
   - unparam
   - usestdlibvars
+  - usetesting

 issues:
   # Maximum issues count per one linter.
@@ -175,132 +175,60 @@ linters-settings:
 # This means that linting errors with less than 0.8 confidence will be ignored.
 # Default: 0.8
 confidence: 0.01
+# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
 rules:
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#blank-imports
   - name: blank-imports
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#bool-literal-in-expr
   - name: bool-literal-in-expr
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#constant-logical-expr
   - name: constant-logical-expr
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-as-argument
-  # TODO (#3372) re-enable linter when it is compatible. https://github.com/golangci/golangci-lint/issues/3280
   - name: context-as-argument
     disabled: true
     arguments:
-      allowTypesBefore: "*testing.T"
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-keys-type
+      - allowTypesBefore: "*testing.T"
   - name: context-keys-type
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#deep-exit
   - name: deep-exit
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#defer
   - name: defer
-    disabled: false
     arguments:
       - ["call-chain", "loop"]
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#dot-imports
   - name: dot-imports
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#duplicated-imports
   - name: duplicated-imports
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#early-return
   - name: early-return
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-block
+    arguments:
+      - "preserveScope"
   - name: empty-block
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-lines
   - name: empty-lines
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-naming
   - name: error-naming
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-return
   - name: error-return
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-strings
   - name: error-strings
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#errorf
   - name: errorf
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#exported
   - name: exported
-    disabled: false
     arguments:
       - "sayRepetitiveInsteadOfStutters"
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#flag-parameter
   - name: flag-parameter
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#identical-branches
   - name: identical-branches
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#if-return
   - name: if-return
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#increment-decrement
-  - name: increment-decrement
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#indent-error-flow
-  - name: indent-error-flow
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#import-shadowing
   - name: import-shadowing
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#package-comments
+  - name: increment-decrement
+  - name: indent-error-flow
+    arguments:
+      - "preserveScope"
   - name: package-comments
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range
   - name: range
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-in-closure
   - name: range-val-in-closure
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-address
   - name: range-val-address
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#redefines-builtin-id
   - name: redefines-builtin-id
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#string-format
   - name: string-format
-    disabled: false
     arguments:
       - - panic
         - '/^[^\n]*$/'
         - must not contain line breaks
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#struct-tag
   - name: struct-tag
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#superfluous-else
   - name: superfluous-else
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#time-equal
-  - name: time-equal
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-naming
-  - name: var-naming
-    disabled: false
-    arguments:
-      - ["ID"] # AllowList
-      - ["Otel", "Aws", "Gcp"] # DenyList
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-declaration
-  - name: var-declaration
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unconditional-recursion
+    arguments:
+      - "preserveScope"
+  - name: time-equal
   - name: unconditional-recursion
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unexported-return
   - name: unexported-return
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unhandled-error
   - name: unhandled-error
-    disabled: false
     arguments:
       - "fmt.Fprint"
       - "fmt.Fprintf"
@@ -308,15 +236,14 @@ linters-settings:
       - "fmt.Print"
       - "fmt.Printf"
      - "fmt.Println"
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unnecessary-stmt
   - name: unnecessary-stmt
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#useless-break
   - name: useless-break
-    disabled: false
-  # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#waitgroup-by-value
+  - name: var-declaration
+  - name: var-naming
+    arguments:
+      - ["ID"] # AllowList
+      - ["Otel", "Aws", "Gcp"] # DenyList
   - name: waitgroup-by-value
-    disabled: false
 testifylint:
   enable-all: true
   disable:

View File

@@ -11,6 +11,46 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
 <!-- Released section -->
 <!-- Don't change this section unless doing release -->
## [1.35.0/0.57.0/0.11.0] 2025-03-05
This release is the last to support [Go 1.22].
The next release will require at least [Go 1.23].
### Added
- Add `ValueFromAttribute` and `KeyValueFromAttribute` in `go.opentelemetry.io/otel/log`. (#6180)
- Add `EventName` and `SetEventName` to `Record` in `go.opentelemetry.io/otel/log`. (#6187)
- Add `EventName` to `RecordFactory` in `go.opentelemetry.io/otel/log/logtest`. (#6187)
- `AssertRecordEqual` in `go.opentelemetry.io/otel/log/logtest` checks `Record.EventName`. (#6187)
- Add `EventName` and `SetEventName` to `Record` in `go.opentelemetry.io/otel/sdk/log`. (#6193)
- Add `EventName` to `RecordFactory` in `go.opentelemetry.io/otel/sdk/log/logtest`. (#6193)
- Emit `Record.EventName` field in `go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc`. (#6211)
- Emit `Record.EventName` field in `go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp`. (#6211)
- Emit `Record.EventName` field in `go.opentelemetry.io/otel/exporters/stdout/stdoutlog` (#6210)
- The `go.opentelemetry.io/otel/semconv/v1.28.0` package.
The package contains semantic conventions from the `v1.28.0` version of the OpenTelemetry Semantic Conventions.
See the [migration documentation](./semconv/v1.28.0/MIGRATION.md) for information on how to upgrade from `go.opentelemetry.io/otel/semconv/v1.27.0`(#6236)
- The `go.opentelemetry.io/otel/semconv/v1.30.0` package.
The package contains semantic conventions from the `v1.30.0` version of the OpenTelemetry Semantic Conventions.
See the [migration documentation](./semconv/v1.30.0/MIGRATION.md) for information on how to upgrade from `go.opentelemetry.io/otel/semconv/v1.28.0`(#6240)
- Document the pitfalls of using `Resource` as a comparable type.
`Resource.Equal` and `Resource.Equivalent` should be used instead. (#6272)
- Support [Go 1.24]. (#6304)
- Add `FilterProcessor` and `EnabledParameters` in `go.opentelemetry.io/otel/sdk/log`.
It replaces `go.opentelemetry.io/otel/sdk/log/internal/x.FilterProcessor`.
Compared to previous version it additionally gives the possibility to filter by resource and instrumentation scope. (#6317)
### Changed
- Update `github.com/prometheus/common` to `v0.62.0`, which changes the `NameValidationScheme` to `NoEscaping`.
This allows metrics names to keep original delimiters (e.g. `.`), rather than replacing with underscores.
This is controlled by the `Content-Type` header, or can be reverted by setting `NameValidationScheme` to `LegacyValidation` in `github.com/prometheus/common/model`. (#6198)
### Fixes
- Eliminate goroutine leak for the processor returned by `NewSimpleSpanProcessor` in `go.opentelemetry.io/otel/sdk/trace` when `Shutdown` is called and the passed `ctx` is canceled and `SpanExporter.Shutdown` has not returned. (#6368)
- Eliminate goroutine leak for the processor returned by `NewBatchSpanProcessor` in `go.opentelemetry.io/otel/sdk/trace` when `ForceFlush` is called and the passed `ctx` is canceled and `SpanExporter.Export` has not returned. (#6369)
 ## [1.34.0/0.56.0/0.10.0] 2025-01-17

 ### Changed
@@ -3197,7 +3237,8 @@ It contains api and sdk for trace and meter.
 - CircleCI build CI manifest files.
 - CODEOWNERS file to track owners of this project.

-[Unreleased]: https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...HEAD
+[Unreleased]: https://github.com/open-telemetry/opentelemetry-go/compare/v1.35.0...HEAD
+[1.35.0/0.57.0/0.11.0]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.35.0
 [1.34.0/0.56.0/0.10.0]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.34.0
 [1.33.0/0.55.0/0.9.0/0.0.12]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.33.0
 [1.32.0/0.54.0/0.8.0/0.0.11]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.32.0
@@ -3288,6 +3329,7 @@ It contains api and sdk for trace and meter.
 <!-- Released section ended -->

+[Go 1.24]: https://go.dev/doc/go1.24
 [Go 1.23]: https://go.dev/doc/go1.23
 [Go 1.22]: https://go.dev/doc/go1.22
 [Go 1.21]: https://go.dev/doc/go1.21

View File

@@ -181,6 +181,18 @@ patterns in the spec.
 For a deeper discussion, see
 [this](https://github.com/open-telemetry/opentelemetry-specification/issues/165).
## Tests
Each functionality should be covered by tests.
Performance-critical functionality should also be covered by benchmarks.
- Pull requests adding a performance-critical functionality
should have `go test -bench` output in their description.
- Pull requests changing a performance-critical functionality
should have [`benchstat`](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat)
output in their description.
 ## Documentation

 Each (non-internal, non-test) package must be documented using
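The new Tests section above asks for `go test -bench` output on pull requests that add performance-critical functionality and for `benchstat` comparisons on pull requests that change it. A minimal, hypothetical benchmark of the kind that guidance expects (the package and benchmarked call here are illustrative only, not part of the repository):

```go
// Hypothetical benchmark; run with `go test -bench=. -count=10` and compare
// two runs (before and after a change) with benchstat.
package example

import (
	"strings"
	"testing"
)

func BenchmarkJoinPath(b *testing.B) {
	parts := []string{"traces", "spans", "events"}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "/")
	}
}
```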

View File

@@ -11,6 +11,10 @@ ALL_COVERAGE_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {}
 GO = go
 TIMEOUT = 60

+# User to run as in docker images.
+DOCKER_USER=$(shell id -u):$(shell id -g)
+DEPENDENCIES_DOCKERFILE=./dependencies.Dockerfile
+
 .DEFAULT_GOAL := precommit

 .PHONY: precommit ci
@@ -81,20 +85,20 @@ PIP := $(PYTOOLS)/pip
 WORKDIR := /workdir

 # The python image to use for the virtual environment.
-PYTHONIMAGE := python:3.11.3-slim-bullseye
+PYTHONIMAGE := $(shell awk '$$4=="python" {print $$2}' $(DEPENDENCIES_DOCKERFILE))

 # Run the python image with the current directory mounted.
-DOCKERPY := docker run --rm -v "$(CURDIR):$(WORKDIR)" -w $(WORKDIR) $(PYTHONIMAGE)
+DOCKERPY := docker run --rm -u $(DOCKER_USER) -v "$(CURDIR):$(WORKDIR)" -w $(WORKDIR) $(PYTHONIMAGE)

 # Create a virtual environment for Python tools.
 $(PYTOOLS):
 # The `--upgrade` flag is needed to ensure that the virtual environment is
 # created with the latest pip version.
-  @$(DOCKERPY) bash -c "python3 -m venv $(VENVDIR) && $(PIP) install --upgrade pip"
+  @$(DOCKERPY) bash -c "python3 -m venv $(VENVDIR) && $(PIP) install --upgrade --cache-dir=$(WORKDIR)/.cache/pip pip"

 # Install python packages into the virtual environment.
 $(PYTOOLS)/%: $(PYTOOLS)
-  @$(DOCKERPY) $(PIP) install -r requirements.txt
+  @$(DOCKERPY) $(PIP) install --cache-dir=$(WORKDIR)/.cache/pip -r requirements.txt

 CODESPELL = $(PYTOOLS)/codespell
 $(CODESPELL): PACKAGE=codespell
@@ -119,7 +123,7 @@ vanity-import-fix: $(PORTO)
 # Generate go.work file for local development.
 .PHONY: go-work
 go-work: $(CROSSLINK)
-  $(CROSSLINK) work --root=$(shell pwd)
+  $(CROSSLINK) work --root=$(shell pwd) --go=1.22.7

 # Build
@@ -265,13 +269,30 @@ check-clean-work-tree:
     exit 1; \
   fi

+# The weaver docker image to use for semconv-generate.
+WEAVER_IMAGE := $(shell awk '$$4=="weaver" {print $$2}' $(DEPENDENCIES_DOCKERFILE))
+
 SEMCONVPKG ?= "semconv/"
 .PHONY: semconv-generate
-semconv-generate: $(SEMCONVGEN) $(SEMCONVKIT)
+semconv-generate: $(SEMCONVKIT)
   [ "$(TAG)" ] || ( echo "TAG unset: missing opentelemetry semantic-conventions tag"; exit 1 )
-  [ "$(OTEL_SEMCONV_REPO)" ] || ( echo "OTEL_SEMCONV_REPO unset: missing path to opentelemetry semantic-conventions repo"; exit 1 )
-  $(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=attribute_group -p conventionType=trace -f attribute_group.go -z "$(SEMCONVPKG)/capitalizations.txt" -t "$(SEMCONVPKG)/template.j2" -s "$(TAG)"
-  $(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=metric -f metric.go -t "$(SEMCONVPKG)/metric_template.j2" -s "$(TAG)"
+  # Ensure the target directory for source code is available.
+  mkdir -p $(PWD)/$(SEMCONVPKG)/${TAG}
+  # Note: We mount a home directory for downloading/storing the semconv repository.
+  # Weaver will automatically clean the cache when finished, but the directories will remain.
+  mkdir -p ~/.weaver
+  docker run --rm \
+    -u $(DOCKER_USER) \
+    --env HOME=/tmp/weaver \
+    --mount 'type=bind,source=$(PWD)/semconv,target=/home/weaver/templates/registry/go,readonly' \
+    --mount 'type=bind,source=$(PWD)/semconv/${TAG},target=/home/weaver/target' \
+    --mount 'type=bind,source=$(HOME)/.weaver,target=/tmp/weaver/.weaver' \
+    $(WEAVER_IMAGE) registry generate \
+    --registry=https://github.com/open-telemetry/semantic-conventions/archive/refs/tags/$(TAG).zip[model] \
+    --templates=/home/weaver/templates \
+    --param tag=$(TAG) \
+    go \
+    /home/weaver/target
   $(SEMCONVKIT) -output "$(SEMCONVPKG)/$(TAG)" -tag "$(TAG)"

 .PHONY: gorelease
View File

@@ -4,6 +4,8 @@
 [![codecov.io](https://codecov.io/gh/open-telemetry/opentelemetry-go/coverage.svg?branch=main)](https://app.codecov.io/gh/open-telemetry/opentelemetry-go?branch=main)
 [![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel)](https://pkg.go.dev/go.opentelemetry.io/otel)
 [![Go Report Card](https://goreportcard.com/badge/go.opentelemetry.io/otel)](https://goreportcard.com/report/go.opentelemetry.io/otel)
+[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/open-telemetry/opentelemetry-go/badge)](https://scorecard.dev/viewer/?uri=github.com/open-telemetry/opentelemetry-go)
+[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/9996/badge)](https://www.bestpractices.dev/projects/9996)
 [![Slack](https://img.shields.io/badge/slack-@cncf/otel--go-brightgreen.svg?logo=slack)](https://cloud-native.slack.com/archives/C01NPAXACKT)

 OpenTelemetry-Go is the [Go](https://golang.org/) implementation of [OpenTelemetry](https://opentelemetry.io/).
@@ -49,18 +51,25 @@ Currently, this project supports the following environments.

 | OS | Go Version | Architecture |
 |----------|------------|--------------|
+| Ubuntu | 1.24 | amd64 |
 | Ubuntu | 1.23 | amd64 |
 | Ubuntu | 1.22 | amd64 |
+| Ubuntu | 1.24 | 386 |
 | Ubuntu | 1.23 | 386 |
 | Ubuntu | 1.22 | 386 |
-| Linux | 1.23 | arm64 |
-| Linux | 1.22 | arm64 |
+| Ubuntu | 1.24 | arm64 |
+| Ubuntu | 1.23 | arm64 |
+| Ubuntu | 1.22 | arm64 |
+| macOS 13 | 1.24 | amd64 |
 | macOS 13 | 1.23 | amd64 |
 | macOS 13 | 1.22 | amd64 |
+| macOS | 1.24 | arm64 |
 | macOS | 1.23 | arm64 |
 | macOS | 1.22 | arm64 |
+| Windows | 1.24 | amd64 |
 | Windows | 1.23 | amd64 |
 | Windows | 1.22 | amd64 |
+| Windows | 1.24 | 386 |
 | Windows | 1.23 | 386 |
 | Windows | 1.22 | 386 |

View File

@@ -5,17 +5,14 @@
 New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated.
 The `semconv-generate` make target is used for this.

-1. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag.
-2. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest`
-3. Run the `make semconv-generate ...` target from this repository.
+1. Set the `TAG` environment variable to the semantic convention tag you want to generate.
+2. Run the `make semconv-generate ...` target from this repository.

 For example,

 ```sh
-export TAG="v1.21.0" # Change to the release version you are generating.
-export OTEL_SEMCONV_REPO="/absolute/path/to/opentelemetry/semantic-conventions"
-docker pull otel/semconvgen:latest
-make semconv-generate # Uses the exported TAG and OTEL_SEMCONV_REPO.
+export TAG="v1.30.0" # Change to the release version you are generating.
+make semconv-generate # Uses the exported TAG.
 ```

 This should create a new sub-package of [`semconv`](./semconv).

View File

@@ -0,0 +1,3 @@
# This is a renovate-friendly source of Docker images.
FROM python:3.13.2-slim-bullseye@sha256:31b581c8218e1f3c58672481b3b7dba8e898852866b408c6a984c22832523935 AS python
FROM otel/weaver:v0.13.2@sha256:ae7346b992e477f629ea327e0979e8a416a97f7956ab1f7e95ac1f44edf1a893 AS weaver

View File

@@ -1,7 +1,7 @@
 {
   "$schema": "https://docs.renovatebot.com/renovate-schema.json",
   "extends": [
-    "config:recommended"
+    "config:best-practices"
   ],
   "ignorePaths": [],
   "labels": ["Skip Changelog", "dependencies"],
@@ -14,6 +14,10 @@
       "matchDepTypes": ["indirect"],
       "enabled": true
     },
+    {
+      "matchPackageNames": ["go.opentelemetry.io/build-tools/**"],
+      "groupName": "build-tools"
+    },
     {
       "matchPackageNames": ["google.golang.org/genproto/googleapis/**"],
       "groupName": "googleapis"

View File

@@ -1 +1 @@
-codespell==2.3.0
+codespell==2.4.1

View File

@@ -5,6 +5,7 @@ package resource // import "go.opentelemetry.io/otel/sdk/resource"

 import (
     "encoding/xml"
+    "errors"
     "fmt"
     "io"
     "os"
@@ -63,7 +64,7 @@ func parsePlistFile(file io.Reader) (map[string]string, error) {
     }

     if len(v.Dict.Key) != len(v.Dict.String) {
-        return nil, fmt.Errorf("the number of <key> and <string> elements doesn't match")
+        return nil, errors.New("the number of <key> and <string> elements doesn't match")
     }

     properties := make(map[string]string, len(v.Dict.Key))

View File

@@ -21,11 +21,22 @@ import (
 // Resources should be passed and stored as pointers
 // (`*resource.Resource`). The `nil` value is equivalent to an empty
 // Resource.
+//
+// Note that the Go == operator compares not just the resource attributes but
+// also all other internals of the Resource type. Therefore, Resource values
+// should not be used as map or database keys. In general, the [Resource.Equal]
+// method should be used instead of direct comparison with ==, since that
+// method ensures the correct comparison of resource attributes, and the
+// [attribute.Distinct] returned from [Resource.Equivalent] should be used for
+// map and database keys instead.
 type Resource struct {
     attrs     attribute.Set
     schemaURL string
 }

+// Compile-time check that the Resource remains comparable.
+var _ map[Resource]struct{} = nil
+
 var (
     defaultResource     *Resource
     defaultResourceOnce sync.Once
@@ -137,15 +148,19 @@ func (r *Resource) Iter() attribute.Iterator {
     return r.attrs.Iter()
 }

-// Equal returns true when a Resource is equivalent to this Resource.
-func (r *Resource) Equal(eq *Resource) bool {
+// Equal returns whether r and o represent the same resource. Two resources can
+// be equal even if they have different schema URLs.
+//
+// See the documentation on the [Resource] type for the pitfalls of using ==
+// with Resource values; most code should use Equal instead.
+func (r *Resource) Equal(o *Resource) bool {
     if r == nil {
         r = Empty()
     }
-    if eq == nil {
-        eq = Empty()
+    if o == nil {
+        o = Empty()
     }
-    return r.Equivalent() == eq.Equivalent()
+    return r.Equivalent() == o.Equivalent()
 }

 // Merge creates a new [Resource] by merging a and b.
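The new doc comment warns against comparing `Resource` values with `==` or using them as map keys. A short sketch of the recommended pattern using the public `Equal` and `Equivalent` APIs (the attribute values are made up for illustration):

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	r1 := resource.NewSchemaless(attribute.String("service.name", "svc"))
	r2 := resource.NewSchemaless(attribute.String("service.name", "svc"))

	// Compare resources with Equal, not with ==.
	fmt.Println(r1.Equal(r2)) // true

	// Key maps by the attribute.Distinct from Equivalent, not by the Resource itself.
	seen := map[attribute.Distinct]struct{}{}
	seen[r1.Equivalent()] = struct{}{}
	fmt.Println(len(seen)) // 1
}
```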

View File

@@ -201,10 +201,9 @@ func (bsp *batchSpanProcessor) ForceFlush(ctx context.Context) error {
         }
     }

-    wait := make(chan error)
+    wait := make(chan error, 1)
     go func() {
         wait <- bsp.exportSpans(ctx)
-        close(wait)
     }()
     // Wait until the export is finished or the context is cancelled/timed out
     select {
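The switch to a buffered channel matters because `ForceFlush` can return on `ctx.Done()` before the export goroutine sends its result; with capacity 1 the goroutine can complete its send and exit instead of blocking forever. A standalone sketch of the same pattern (not the SDK code itself; `slowExport` is a hypothetical stand-in for the exporter call):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// slowExport stands in for an exporter call that may outlive the caller's deadline.
func slowExport(ctx context.Context) error {
	time.Sleep(50 * time.Millisecond)
	return nil
}

// flush mirrors the ForceFlush pattern: the buffered channel lets the goroutine
// deliver its result and exit even when nobody is left to receive it.
func flush(ctx context.Context) error {
	wait := make(chan error, 1)
	go func() {
		wait <- slowExport(ctx)
	}()
	select {
	case err := <-wait:
		return err
	case <-ctx.Done():
		return ctx.Err() // caller gives up; the goroutine still completes its send and exits
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	fmt.Println(flush(ctx)) // context deadline exceeded, with no goroutine leak
}
```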

View File

@@ -47,12 +47,12 @@ const (
     // Drop will not record the span and all attributes/events will be dropped.
     Drop SamplingDecision = iota

-    // Record indicates the span's `IsRecording() == true`, but `Sampled` flag
-    // *must not* be set.
+    // RecordOnly indicates the span's IsRecording method returns true, but trace.FlagsSampled flag
+    // must not be set.
     RecordOnly

-    // RecordAndSample has span's `IsRecording() == true` and `Sampled` flag
-    // *must* be set.
+    // RecordAndSample indicates the span's IsRecording method returns true and trace.FlagsSampled flag
+    // must be set.
     RecordAndSample
 )
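For context on where these constants are used: a `Sampler` returns them in its `SamplingResult`. A minimal, hypothetical sampler that records health-check spans without sampling them (the span name check is illustrative only):

```go
package main

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// healthCheckSampler records health-check spans (RecordOnly) without marking
// them sampled, and records and samples everything else.
type healthCheckSampler struct{}

func (healthCheckSampler) ShouldSample(p sdktrace.SamplingParameters) sdktrace.SamplingResult {
	if p.Name == "/healthz" {
		return sdktrace.SamplingResult{Decision: sdktrace.RecordOnly}
	}
	return sdktrace.SamplingResult{Decision: sdktrace.RecordAndSample}
}

func (healthCheckSampler) Description() string { return "healthCheckSampler" }

func main() {
	_ = sdktrace.NewTracerProvider(sdktrace.WithSampler(healthCheckSampler{}))
}
```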

View File

@@ -58,7 +58,7 @@ func (ssp *simpleSpanProcessor) Shutdown(ctx context.Context) error {
     var err error
     ssp.stopOnce.Do(func() {
         stopFunc := func(exp SpanExporter) (<-chan error, func()) {
-            done := make(chan error)
+            done := make(chan error, 1)
             return done, func() { done <- exp.Shutdown(ctx) }
         }

View File

@@ -5,5 +5,5 @@ package sdk // import "go.opentelemetry.io/otel/sdk"

 // Version is the current release version of the OpenTelemetry SDK in use.
 func Version() string {
-    return "1.34.0"
+    return "1.35.0"
 }

661
vendor/go.opentelemetry.io/otel/trace/auto.go (generated, vendored, new file)
View File

@@ -0,0 +1,661 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package trace // import "go.opentelemetry.io/otel/trace"
import (
"context"
"encoding/json"
"fmt"
"math"
"os"
"reflect"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"unicode/utf8"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"go.opentelemetry.io/otel/trace/embedded"
"go.opentelemetry.io/otel/trace/internal/telemetry"
)
// newAutoTracerProvider returns an auto-instrumentable [trace.TracerProvider].
// If an [go.opentelemetry.io/auto.Instrumentation] is configured to instrument
// the process using the returned TracerProvider, all of the telemetry it
// produces will be processed and handled by that Instrumentation. By default,
// if no Instrumentation instruments the TracerProvider it will not generate
// any trace telemetry.
func newAutoTracerProvider() TracerProvider { return tracerProviderInstance }
var tracerProviderInstance = new(autoTracerProvider)
type autoTracerProvider struct{ embedded.TracerProvider }
var _ TracerProvider = autoTracerProvider{}
func (p autoTracerProvider) Tracer(name string, opts ...TracerOption) Tracer {
cfg := NewTracerConfig(opts...)
return autoTracer{
name: name,
version: cfg.InstrumentationVersion(),
schemaURL: cfg.SchemaURL(),
}
}
type autoTracer struct {
embedded.Tracer
name, schemaURL, version string
}
var _ Tracer = autoTracer{}
func (t autoTracer) Start(ctx context.Context, name string, opts ...SpanStartOption) (context.Context, Span) {
var psc SpanContext
sampled := true
span := new(autoSpan)
// Ask eBPF for sampling decision and span context info.
t.start(ctx, span, &psc, &sampled, &span.spanContext)
span.sampled.Store(sampled)
ctx = ContextWithSpan(ctx, span)
if sampled {
// Only build traces if sampled.
cfg := NewSpanStartConfig(opts...)
span.traces, span.span = t.traces(name, cfg, span.spanContext, psc)
}
return ctx, span
}
// Expected to be implemented in eBPF.
//
//go:noinline
func (t *autoTracer) start(
ctx context.Context,
spanPtr *autoSpan,
psc *SpanContext,
sampled *bool,
sc *SpanContext,
) {
start(ctx, spanPtr, psc, sampled, sc)
}
// start is used for testing.
var start = func(context.Context, *autoSpan, *SpanContext, *bool, *SpanContext) {}
func (t autoTracer) traces(name string, cfg SpanConfig, sc, psc SpanContext) (*telemetry.Traces, *telemetry.Span) {
span := &telemetry.Span{
TraceID: telemetry.TraceID(sc.TraceID()),
SpanID: telemetry.SpanID(sc.SpanID()),
Flags: uint32(sc.TraceFlags()),
TraceState: sc.TraceState().String(),
ParentSpanID: telemetry.SpanID(psc.SpanID()),
Name: name,
Kind: spanKind(cfg.SpanKind()),
}
span.Attrs, span.DroppedAttrs = convCappedAttrs(maxSpan.Attrs, cfg.Attributes())
links := cfg.Links()
if limit := maxSpan.Links; limit == 0 {
n := int64(len(links))
if n > 0 {
span.DroppedLinks = uint32(min(n, math.MaxUint32)) // nolint: gosec // Bounds checked.
}
} else {
if limit > 0 {
n := int64(max(len(links)-limit, 0))
span.DroppedLinks = uint32(min(n, math.MaxUint32)) // nolint: gosec // Bounds checked.
links = links[n:]
}
span.Links = convLinks(links)
}
if t := cfg.Timestamp(); !t.IsZero() {
span.StartTime = cfg.Timestamp()
} else {
span.StartTime = time.Now()
}
return &telemetry.Traces{
ResourceSpans: []*telemetry.ResourceSpans{
{
ScopeSpans: []*telemetry.ScopeSpans{
{
Scope: &telemetry.Scope{
Name: t.name,
Version: t.version,
},
Spans: []*telemetry.Span{span},
SchemaURL: t.schemaURL,
},
},
},
},
}, span
}
func spanKind(kind SpanKind) telemetry.SpanKind {
switch kind {
case SpanKindInternal:
return telemetry.SpanKindInternal
case SpanKindServer:
return telemetry.SpanKindServer
case SpanKindClient:
return telemetry.SpanKindClient
case SpanKindProducer:
return telemetry.SpanKindProducer
case SpanKindConsumer:
return telemetry.SpanKindConsumer
}
return telemetry.SpanKind(0) // undefined.
}
type autoSpan struct {
embedded.Span
spanContext SpanContext
sampled atomic.Bool
mu sync.Mutex
traces *telemetry.Traces
span *telemetry.Span
}
func (s *autoSpan) SpanContext() SpanContext {
if s == nil {
return SpanContext{}
}
// s.spanContext is immutable, do not acquire lock s.mu.
return s.spanContext
}
func (s *autoSpan) IsRecording() bool {
if s == nil {
return false
}
return s.sampled.Load()
}
func (s *autoSpan) SetStatus(c codes.Code, msg string) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
if s.span.Status == nil {
s.span.Status = new(telemetry.Status)
}
s.span.Status.Message = msg
switch c {
case codes.Unset:
s.span.Status.Code = telemetry.StatusCodeUnset
case codes.Error:
s.span.Status.Code = telemetry.StatusCodeError
case codes.Ok:
s.span.Status.Code = telemetry.StatusCodeOK
}
}
func (s *autoSpan) SetAttributes(attrs ...attribute.KeyValue) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
limit := maxSpan.Attrs
if limit == 0 {
// No attributes allowed.
n := int64(len(attrs))
if n > 0 {
s.span.DroppedAttrs += uint32(min(n, math.MaxUint32)) // nolint: gosec // Bounds checked.
}
return
}
m := make(map[string]int)
for i, a := range s.span.Attrs {
m[a.Key] = i
}
for _, a := range attrs {
val := convAttrValue(a.Value)
if val.Empty() {
s.span.DroppedAttrs++
continue
}
if idx, ok := m[string(a.Key)]; ok {
s.span.Attrs[idx] = telemetry.Attr{
Key: string(a.Key),
Value: val,
}
} else if limit < 0 || len(s.span.Attrs) < limit {
s.span.Attrs = append(s.span.Attrs, telemetry.Attr{
Key: string(a.Key),
Value: val,
})
m[string(a.Key)] = len(s.span.Attrs) - 1
} else {
s.span.DroppedAttrs++
}
}
}
// convCappedAttrs converts up to limit attrs into a []telemetry.Attr. The
// number of dropped attributes is also returned.
func convCappedAttrs(limit int, attrs []attribute.KeyValue) ([]telemetry.Attr, uint32) {
n := len(attrs)
if limit == 0 {
var out uint32
if n > 0 {
out = uint32(min(int64(n), math.MaxUint32)) // nolint: gosec // Bounds checked.
}
return nil, out
}
if limit < 0 {
// Unlimited.
return convAttrs(attrs), 0
}
if n < 0 {
n = 0
}
limit = min(n, limit)
return convAttrs(attrs[:limit]), uint32(n - limit) // nolint: gosec // Bounds checked.
}
func convAttrs(attrs []attribute.KeyValue) []telemetry.Attr {
if len(attrs) == 0 {
// Avoid allocations if not necessary.
return nil
}
out := make([]telemetry.Attr, 0, len(attrs))
for _, attr := range attrs {
key := string(attr.Key)
val := convAttrValue(attr.Value)
if val.Empty() {
continue
}
out = append(out, telemetry.Attr{Key: key, Value: val})
}
return out
}
func convAttrValue(value attribute.Value) telemetry.Value {
switch value.Type() {
case attribute.BOOL:
return telemetry.BoolValue(value.AsBool())
case attribute.INT64:
return telemetry.Int64Value(value.AsInt64())
case attribute.FLOAT64:
return telemetry.Float64Value(value.AsFloat64())
case attribute.STRING:
v := truncate(maxSpan.AttrValueLen, value.AsString())
return telemetry.StringValue(v)
case attribute.BOOLSLICE:
slice := value.AsBoolSlice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.BoolValue(v))
}
return telemetry.SliceValue(out...)
case attribute.INT64SLICE:
slice := value.AsInt64Slice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.Int64Value(v))
}
return telemetry.SliceValue(out...)
case attribute.FLOAT64SLICE:
slice := value.AsFloat64Slice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.Float64Value(v))
}
return telemetry.SliceValue(out...)
case attribute.STRINGSLICE:
slice := value.AsStringSlice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
v = truncate(maxSpan.AttrValueLen, v)
out = append(out, telemetry.StringValue(v))
}
return telemetry.SliceValue(out...)
}
return telemetry.Value{}
}
// truncate returns a truncated version of s such that it contains at most the
// limit number of characters. Truncation is applied by returning the limit
// number of valid characters contained in s.
//
// If limit is negative, it returns the original string.
//
// UTF-8 is supported. When truncating, all invalid characters are dropped
// before applying truncation.
//
// If s already contains no more than the limit number of bytes, it is returned
// unchanged. No invalid characters are removed.
func truncate(limit int, s string) string {
// This prioritizes performance in the following order based on the most
// common expected use-cases.
//
// - Short values less than the default limit (128).
// - Strings with valid encodings that exceed the limit.
// - No limit.
// - Strings with invalid encodings that exceed the limit.
if limit < 0 || len(s) <= limit {
return s
}
// Optimistically, assume all valid UTF-8.
var b strings.Builder
count := 0
for i, c := range s {
if c != utf8.RuneError {
count++
if count > limit {
return s[:i]
}
continue
}
_, size := utf8.DecodeRuneInString(s[i:])
if size == 1 {
// Invalid encoding.
b.Grow(len(s) - 1)
_, _ = b.WriteString(s[:i])
s = s[i:]
break
}
}
// Fast-path, no invalid input.
if b.Cap() == 0 {
return s
}
// Truncate while validating UTF-8.
for i := 0; i < len(s) && count < limit; {
c := s[i]
if c < utf8.RuneSelf {
// Optimization for single byte runes (common case).
_ = b.WriteByte(c)
i++
count++
continue
}
_, size := utf8.DecodeRuneInString(s[i:])
if size == 1 {
// We checked for all 1-byte runes above, this is a RuneError.
i++
continue
}
_, _ = b.WriteString(s[i : i+size])
i += size
count++
}
return b.String()
}
func (s *autoSpan) End(opts ...SpanEndOption) {
if s == nil || !s.sampled.Swap(false) {
return
}
// s.end exists so the lock (s.mu) is not held while s.ended is called.
s.ended(s.end(opts))
}
func (s *autoSpan) end(opts []SpanEndOption) []byte {
s.mu.Lock()
defer s.mu.Unlock()
cfg := NewSpanEndConfig(opts...)
if t := cfg.Timestamp(); !t.IsZero() {
s.span.EndTime = cfg.Timestamp()
} else {
s.span.EndTime = time.Now()
}
b, _ := json.Marshal(s.traces) // TODO: do not ignore this error.
return b
}
// Expected to be implemented in eBPF.
//
//go:noinline
func (*autoSpan) ended(buf []byte) { ended(buf) }
// ended is used for testing.
var ended = func([]byte) {}
func (s *autoSpan) RecordError(err error, opts ...EventOption) {
if s == nil || err == nil || !s.sampled.Load() {
return
}
cfg := NewEventConfig(opts...)
attrs := cfg.Attributes()
attrs = append(attrs,
semconv.ExceptionType(typeStr(err)),
semconv.ExceptionMessage(err.Error()),
)
if cfg.StackTrace() {
buf := make([]byte, 2048)
n := runtime.Stack(buf, false)
attrs = append(attrs, semconv.ExceptionStacktrace(string(buf[0:n])))
}
s.mu.Lock()
defer s.mu.Unlock()
s.addEvent(semconv.ExceptionEventName, cfg.Timestamp(), attrs)
}
func typeStr(i any) string {
t := reflect.TypeOf(i)
if t.PkgPath() == "" && t.Name() == "" {
// Likely a builtin type.
return t.String()
}
return fmt.Sprintf("%s.%s", t.PkgPath(), t.Name())
}
func (s *autoSpan) AddEvent(name string, opts ...EventOption) {
if s == nil || !s.sampled.Load() {
return
}
cfg := NewEventConfig(opts...)
s.mu.Lock()
defer s.mu.Unlock()
s.addEvent(name, cfg.Timestamp(), cfg.Attributes())
}
// addEvent adds an event with name and attrs at tStamp to the span. The span
// lock (s.mu) needs to be held by the caller.
func (s *autoSpan) addEvent(name string, tStamp time.Time, attrs []attribute.KeyValue) {
limit := maxSpan.Events
if limit == 0 {
s.span.DroppedEvents++
return
}
if limit > 0 && len(s.span.Events) == limit {
// Drop head while avoiding allocation of more capacity.
copy(s.span.Events[:limit-1], s.span.Events[1:])
s.span.Events = s.span.Events[:limit-1]
s.span.DroppedEvents++
}
e := &telemetry.SpanEvent{Time: tStamp, Name: name}
e.Attrs, e.DroppedAttrs = convCappedAttrs(maxSpan.EventAttrs, attrs)
s.span.Events = append(s.span.Events, e)
}
func (s *autoSpan) AddLink(link Link) {
if s == nil || !s.sampled.Load() {
return
}
l := maxSpan.Links
s.mu.Lock()
defer s.mu.Unlock()
if l == 0 {
s.span.DroppedLinks++
return
}
if l > 0 && len(s.span.Links) == l {
// Drop head while avoiding allocation of more capacity.
copy(s.span.Links[:l-1], s.span.Links[1:])
s.span.Links = s.span.Links[:l-1]
s.span.DroppedLinks++
}
s.span.Links = append(s.span.Links, convLink(link))
}
func convLinks(links []Link) []*telemetry.SpanLink {
out := make([]*telemetry.SpanLink, 0, len(links))
for _, link := range links {
out = append(out, convLink(link))
}
return out
}
func convLink(link Link) *telemetry.SpanLink {
l := &telemetry.SpanLink{
TraceID: telemetry.TraceID(link.SpanContext.TraceID()),
SpanID: telemetry.SpanID(link.SpanContext.SpanID()),
TraceState: link.SpanContext.TraceState().String(),
Flags: uint32(link.SpanContext.TraceFlags()),
}
l.Attrs, l.DroppedAttrs = convCappedAttrs(maxSpan.LinkAttrs, link.Attributes)
return l
}
func (s *autoSpan) SetName(name string) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
s.span.Name = name
}
func (*autoSpan) TracerProvider() TracerProvider { return newAutoTracerProvider() }
// maxSpan holds the span limits resolved during startup.
var maxSpan = newSpanLimits()
type spanLimits struct {
// Attrs is the number of allowed attributes for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_COUNT_LIMIT, or 128 if
// that is not set, is used.
Attrs int
// AttrValueLen is the maximum attribute value length allowed for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT, or -1
// if that is not set, is used.
AttrValueLen int
// Events is the number of allowed events for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_EVENT_COUNT_LIMIT key, or 128 is used if that is not set.
Events int
// EventAttrs is the number of allowed attributes for a span event.
//
// This is resolved from the environment variable value for the
// OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT key, or 128 is used if that is not set.
EventAttrs int
// Links is the number of allowed Links for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_LINK_COUNT_LIMIT, or 128 is used if that is not set.
Links int
// LinkAttrs is the number of allowed attributes for a span link.
//
// This is resolved from the environment variable value for the
// OTEL_LINK_ATTRIBUTE_COUNT_LIMIT, or 128 is used if that is not set.
LinkAttrs int
}
func newSpanLimits() spanLimits {
return spanLimits{
Attrs: firstEnv(
128,
"OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT",
"OTEL_ATTRIBUTE_COUNT_LIMIT",
),
AttrValueLen: firstEnv(
-1, // Unlimited.
"OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT",
"OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT",
),
Events: firstEnv(128, "OTEL_SPAN_EVENT_COUNT_LIMIT"),
EventAttrs: firstEnv(128, "OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT"),
Links: firstEnv(128, "OTEL_SPAN_LINK_COUNT_LIMIT"),
LinkAttrs: firstEnv(128, "OTEL_LINK_ATTRIBUTE_COUNT_LIMIT"),
}
}
// firstEnv returns the parsed integer value of the first matching environment
// variable from keys. The defaultVal is returned if the value is not an
// integer or no match is found.
func firstEnv(defaultVal int, keys ...string) int {
for _, key := range keys {
strV := os.Getenv(key)
if strV == "" {
continue
}
v, err := strconv.Atoi(strV)
if err == nil {
return v
}
// Ignore invalid environment variable.
}
return defaultVal
}
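
For reference, the limit resolution above checks the span-specific environment variable first, then the general one, and only then falls back to the default. The following standalone sketch mirrors that lookup order; the firstEnv here is a re-implementation for illustration, not the vendored helper, and the variable values are made up.

package main

import (
	"fmt"
	"os"
	"strconv"
)

// firstEnv mirrors the resolution order used by newSpanLimits: the first key
// whose value parses as an integer wins; otherwise defaultVal is returned.
func firstEnv(defaultVal int, keys ...string) int {
	for _, key := range keys {
		if s := os.Getenv(key); s != "" {
			if v, err := strconv.Atoi(s); err == nil {
				return v
			}
		}
	}
	return defaultVal
}

func main() {
	os.Setenv("OTEL_ATTRIBUTE_COUNT_LIMIT", "64")
	os.Setenv("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "32")

	// The span-specific key takes precedence over the general one.
	attrs := firstEnv(128, "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "OTEL_ATTRIBUTE_COUNT_LIMIT")
	fmt.Println(attrs) // 32
}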

View File

@ -0,0 +1,58 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
// Attr is a key-value pair.
type Attr struct {
Key string `json:"key,omitempty"`
Value Value `json:"value,omitempty"`
}
// String returns an Attr for a string value.
func String(key, value string) Attr {
return Attr{key, StringValue(value)}
}
// Int64 returns an Attr for an int64 value.
func Int64(key string, value int64) Attr {
return Attr{key, Int64Value(value)}
}
// Int returns an Attr for an int value.
func Int(key string, value int) Attr {
return Int64(key, int64(value))
}
// Float64 returns an Attr for a float64 value.
func Float64(key string, value float64) Attr {
return Attr{key, Float64Value(value)}
}
// Bool returns an Attr for a bool value.
func Bool(key string, value bool) Attr {
return Attr{key, BoolValue(value)}
}
// Bytes returns an Attr for a []byte value.
// The passed slice must not be changed after it is passed.
func Bytes(key string, value []byte) Attr {
return Attr{key, BytesValue(value)}
}
// Slice returns an Attr for a []Value value.
// The passed slice must not be changed after it is passed.
func Slice(key string, value ...Value) Attr {
return Attr{key, SliceValue(value...)}
}
// Map returns an Attr for a map value.
// The passed slice must not be changed after it is passed.
func Map(key string, value ...Attr) Attr {
return Attr{key, MapValue(value...)}
}
// Equal returns if a is equal to b.
func (a Attr) Equal(b Attr) bool {
return a.Key == b.Key && a.Value.Equal(b.Value)
}
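
To illustrate the constructors above, here is a short sketch written as if it lived inside the telemetry package (the package is internal, so it cannot be imported from outside the otel module); the attribute names and values are made up.

package telemetry

import "fmt"

// exampleAttrs is an illustrative sketch, not part of the vendored file.
func exampleAttrs() {
	a := String("service.name", "checkout")
	b := Int("retry.count", 3)
	nested := Map("net", String("peer.name", "db"), Int64("peer.port", 5432))

	fmt.Println(a.Equal(String("service.name", "checkout"))) // true
	fmt.Println(b.Equal(Int("retry.count", 4)))               // false
	fmt.Println(len(nested.Value.AsMap()))                    // 2
}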

View File

@ -0,0 +1,8 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package telemetry provides lightweight representations of OpenTelemetry
telemetry that are compatible with the OTLP JSON protobuf encoding.
*/
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"

View File

@ -0,0 +1,103 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"encoding/hex"
"errors"
"fmt"
)
const (
traceIDSize = 16
spanIDSize = 8
)
// TraceID is a custom data type that is used for all trace IDs.
type TraceID [traceIDSize]byte
// String returns the hex string representation form of a TraceID.
func (tid TraceID) String() string {
return hex.EncodeToString(tid[:])
}
// IsEmpty returns true if the trace ID contains only zero bytes.
func (tid TraceID) IsEmpty() bool {
return tid == [traceIDSize]byte{}
}
// MarshalJSON converts the trace ID into a hex string enclosed in quotes.
func (tid TraceID) MarshalJSON() ([]byte, error) {
if tid.IsEmpty() {
return []byte(`""`), nil
}
return marshalJSON(tid[:])
}
// UnmarshalJSON inflates the trace ID from hex string, possibly enclosed in
// quotes.
func (tid *TraceID) UnmarshalJSON(data []byte) error {
*tid = [traceIDSize]byte{}
return unmarshalJSON(tid[:], data)
}
// SpanID is a custom data type that is used for all span IDs.
type SpanID [spanIDSize]byte
// String returns the hex string representation form of a SpanID.
func (sid SpanID) String() string {
return hex.EncodeToString(sid[:])
}
// IsEmpty returns true if the span ID contains only zero bytes.
func (sid SpanID) IsEmpty() bool {
return sid == [spanIDSize]byte{}
}
// MarshalJSON converts span ID into a hex string enclosed in quotes.
func (sid SpanID) MarshalJSON() ([]byte, error) {
if sid.IsEmpty() {
return []byte(`""`), nil
}
return marshalJSON(sid[:])
}
// UnmarshalJSON decodes span ID from hex string, possibly enclosed in quotes.
func (sid *SpanID) UnmarshalJSON(data []byte) error {
*sid = [spanIDSize]byte{}
return unmarshalJSON(sid[:], data)
}
// marshalJSON converts id into a hex string enclosed in quotes.
func marshalJSON(id []byte) ([]byte, error) {
// Plus 2 quote chars at the start and end.
hexLen := hex.EncodedLen(len(id)) + 2
b := make([]byte, hexLen)
hex.Encode(b[1:hexLen-1], id)
b[0], b[hexLen-1] = '"', '"'
return b, nil
}
// unmarshalJSON inflates trace id from hex string, possibly enclosed in quotes.
func unmarshalJSON(dst []byte, src []byte) error {
if l := len(src); l >= 2 && src[0] == '"' && src[l-1] == '"' {
src = src[1 : l-1]
}
nLen := len(src)
if nLen == 0 {
return nil
}
if len(dst) != hex.DecodedLen(nLen) {
return errors.New("invalid length for ID")
}
_, err := hex.Decode(dst, src)
if err != nil {
return fmt.Errorf("cannot unmarshal ID from string '%s': %w", string(src), err)
}
return nil
}
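
The quoted-hex JSON encoding implemented above can be exercised end to end with an in-package sketch (illustrative only; the ID bytes are made up).

package telemetry

import (
	"encoding/json"
	"fmt"
)

// exampleIDJSON is an illustrative sketch, not part of the vendored file.
func exampleIDJSON() {
	tid := TraceID{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
		0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10}
	out, _ := json.Marshal(tid)
	fmt.Println(string(out)) // "0102030405060708090a0b0c0d0e0f10"

	var sid SpanID
	_ = json.Unmarshal([]byte(`"0102030405060708"`), &sid)
	fmt.Println(sid.String()) // 0102030405060708

	// An all-zero ID marshals to an empty quoted string.
	empty, _ := json.Marshal(TraceID{})
	fmt.Println(string(empty)) // ""
}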

View File

@ -0,0 +1,67 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"encoding/json"
"strconv"
)
// protoInt64 represents the protobuf encoding of integers which can be either
// strings or integers.
type protoInt64 int64
// Int64 returns the protoInt64 as an int64.
func (i *protoInt64) Int64() int64 { return int64(*i) }
// UnmarshalJSON decodes both strings and integers.
func (i *protoInt64) UnmarshalJSON(data []byte) error {
if data[0] == '"' {
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
parsedInt, err := strconv.ParseInt(str, 10, 64)
if err != nil {
return err
}
*i = protoInt64(parsedInt)
} else {
var parsedInt int64
if err := json.Unmarshal(data, &parsedInt); err != nil {
return err
}
*i = protoInt64(parsedInt)
}
return nil
}
// protoUint64 represents the protobuf encoding of integers which can be either
// strings or integers.
type protoUint64 uint64
// Uint64 returns the protoUint64 as a uint64.
func (i *protoUint64) Uint64() uint64 { return uint64(*i) }
// UnmarshalJSON decodes both strings and integers.
func (i *protoUint64) UnmarshalJSON(data []byte) error {
if data[0] == '"' {
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
parsedUint, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*i = protoUint64(parsedUint)
} else {
var parsedUint uint64
if err := json.Unmarshal(data, &parsedUint); err != nil {
return err
}
*i = protoUint64(parsedUint)
}
return nil
}
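
The point of protoInt64 and protoUint64 is that the OTLP JSON mapping allows 64-bit integers to arrive either as JSON numbers or as quoted strings; both forms decode to the same value, as this in-package sketch (illustrative only) shows.

package telemetry

import (
	"encoding/json"
	"fmt"
)

// exampleProtoInt64 is an illustrative sketch, not part of the vendored file.
func exampleProtoInt64() {
	var fromString, fromNumber protoInt64

	_ = json.Unmarshal([]byte(`"1234567890123"`), &fromString)
	_ = json.Unmarshal([]byte(`1234567890123`), &fromNumber)

	fmt.Println(fromString.Int64() == fromNumber.Int64()) // true
}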

View File

@ -0,0 +1,66 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Resource information.
type Resource struct {
// Attrs are the set of attributes that describe the resource. Attribute
// keys MUST be unique (it is not allowed to have more than one attribute
// with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// DroppedAttrs is the number of dropped attributes. If the value
// is 0, then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into r.
func (r *Resource) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Resource type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Resource field: %#v", keyIface)
}
switch key {
case "attributes":
err = decoder.Decode(&r.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&r.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}

View File

@ -0,0 +1,67 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Scope is the identifying values of the instrumentation scope.
type Scope struct {
Name string `json:"name,omitempty"`
Version string `json:"version,omitempty"`
Attrs []Attr `json:"attributes,omitempty"`
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into s.
func (s *Scope) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Scope type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Scope field: %#v", keyIface)
}
switch key {
case "name":
err = decoder.Decode(&s.Name)
case "version":
err = decoder.Decode(&s.Version)
case "attributes":
err = decoder.Decode(&s.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&s.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}

View File

@ -0,0 +1,460 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"bytes"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"time"
)
// A Span represents a single operation performed by a single component of the
// system.
type Span struct {
// A unique identifier for a trace. All spans from the same trace share
// the same `trace_id`. The ID is a 16-byte array. An ID with all zeroes OR
// of length other than 16 bytes is considered invalid (empty string in OTLP/JSON
// is zero-length and thus is also invalid).
//
// This field is required.
TraceID TraceID `json:"traceId,omitempty"`
// A unique identifier for a span within a trace, assigned when the span
// is created. The ID is an 8-byte array. An ID with all zeroes OR of length
// other than 8 bytes is considered invalid (empty string in OTLP/JSON
// is zero-length and thus is also invalid).
//
// This field is required.
SpanID SpanID `json:"spanId,omitempty"`
// trace_state conveys information about request position in multiple distributed tracing graphs.
// It is a trace_state in w3c-trace-context format: https://www.w3.org/TR/trace-context/#tracestate-header
// See also https://github.com/w3c/distributed-tracing for more details about this field.
TraceState string `json:"traceState,omitempty"`
// The `span_id` of this span's parent span. If this is a root span, then this
// field must be empty. The ID is an 8-byte array.
ParentSpanID SpanID `json:"parentSpanId,omitempty"`
// Flags, a bit field.
//
// Bits 0-7 (8 least significant bits) are the trace flags as defined in W3C Trace
// Context specification. To read the 8-bit W3C trace flag, use
// `flags & SPAN_FLAGS_TRACE_FLAGS_MASK`.
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Bits 8 and 9 represent the 3 states of whether a span's parent
// is remote. The states are (unknown, is not remote, is remote).
// To read whether the value is known, use `(flags & SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK) != 0`.
// To read whether the span is remote, use `(flags & SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK) != 0`.
//
// When creating span messages, if the message is logically forwarded from another source
// with an equivalent flags fields (i.e., usually another OTLP span message), the field SHOULD
// be copied as-is. If creating from a source that does not have an equivalent flags field
// (such as a runtime representation of an OpenTelemetry span), the high 22 bits MUST
// be set to zero.
// Readers MUST NOT assume that bits 10-31 (22 most significant bits) will be zero.
//
// [Optional].
Flags uint32 `json:"flags,omitempty"`
// A description of the span's operation.
//
// For example, the name can be a qualified method name or a file name
// and a line number where the operation is called. A best practice is to use
// the same display name at the same call point in an application.
// This makes it easier to correlate spans in different traces.
//
// This field is semantically required to be set to non-empty string.
// Empty value is equivalent to an unknown span name.
//
// This field is required.
Name string `json:"name"`
// Distinguishes between spans generated in a particular context. For example,
// two spans with the same name may be distinguished using `CLIENT` (caller)
// and `SERVER` (callee) to identify queueing latency associated with the span.
Kind SpanKind `json:"kind,omitempty"`
// start_time_unix_nano is the start time of the span. On the client side, this is the time
// kept by the local machine where the span execution starts. On the server side, this
// is the time when the server's application handler starts running.
// Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
//
// This field is semantically required and it is expected that end_time >= start_time.
StartTime time.Time `json:"startTimeUnixNano,omitempty"`
// end_time_unix_nano is the end time of the span. On the client side, this is the time
// kept by the local machine where the span execution ends. On the server side, this
// is the time when the server application handler stops running.
// Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
//
// This field is semantically required and it is expected that end_time >= start_time.
EndTime time.Time `json:"endTimeUnixNano,omitempty"`
// attributes is a collection of key/value pairs. Note, global attributes
// like server name can be set using the resource API. Examples of attributes:
//
// "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
// "/http/server_latency": 300
// "example.com/myattribute": true
// "example.com/score": 10.239
//
// The OpenTelemetry API specification further restricts the allowed value types:
// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of attributes that were discarded. Attributes
// can be discarded because their keys are too long or because there are too many
// attributes. If this value is 0, then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
// events is a collection of Event items.
Events []*SpanEvent `json:"events,omitempty"`
// dropped_events_count is the number of dropped events. If the value is 0, then no
// events were dropped.
DroppedEvents uint32 `json:"droppedEventsCount,omitempty"`
// links is a collection of Links, which are references from this span to a span
// in the same or different trace.
Links []*SpanLink `json:"links,omitempty"`
// dropped_links_count is the number of dropped links after the maximum size was
// enforced. If this value is 0, then no links were dropped.
DroppedLinks uint32 `json:"droppedLinksCount,omitempty"`
// An optional final status for this span. Semantically when Status isn't set, it means
// span's status code is unset, i.e. assume STATUS_CODE_UNSET (code = 0).
Status *Status `json:"status,omitempty"`
}
// MarshalJSON encodes s into OTLP formatted JSON.
func (s Span) MarshalJSON() ([]byte, error) {
startT := s.StartTime.UnixNano()
if s.StartTime.IsZero() || startT < 0 {
startT = 0
}
endT := s.EndTime.UnixNano()
if s.EndTime.IsZero() || endT < 0 {
endT = 0
}
// Override non-empty default SpanID marshal and omitempty.
var parentSpanId string
if !s.ParentSpanID.IsEmpty() {
b := make([]byte, hex.EncodedLen(spanIDSize))
hex.Encode(b, s.ParentSpanID[:])
parentSpanId = string(b)
}
type Alias Span
return json.Marshal(struct {
Alias
ParentSpanID string `json:"parentSpanId,omitempty"`
StartTime uint64 `json:"startTimeUnixNano,omitempty"`
EndTime uint64 `json:"endTimeUnixNano,omitempty"`
}{
Alias: Alias(s),
ParentSpanID: parentSpanId,
StartTime: uint64(startT), // nolint:gosec // >0 checked above.
EndTime: uint64(endT), // nolint:gosec // >0 checked above.
})
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into s.
func (s *Span) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Span type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Span field: %#v", keyIface)
}
switch key {
case "traceId", "trace_id":
err = decoder.Decode(&s.TraceID)
case "spanId", "span_id":
err = decoder.Decode(&s.SpanID)
case "traceState", "trace_state":
err = decoder.Decode(&s.TraceState)
case "parentSpanId", "parent_span_id":
err = decoder.Decode(&s.ParentSpanID)
case "flags":
err = decoder.Decode(&s.Flags)
case "name":
err = decoder.Decode(&s.Name)
case "kind":
err = decoder.Decode(&s.Kind)
case "startTimeUnixNano", "start_time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
v := int64(min(val.Uint64(), math.MaxInt64)) // nolint: gosec // Overflow checked.
s.StartTime = time.Unix(0, v)
case "endTimeUnixNano", "end_time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
v := int64(min(val.Uint64(), math.MaxInt64)) // nolint: gosec // Overflow checked.
s.EndTime = time.Unix(0, v)
case "attributes":
err = decoder.Decode(&s.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&s.DroppedAttrs)
case "events":
err = decoder.Decode(&s.Events)
case "droppedEventsCount", "dropped_events_count":
err = decoder.Decode(&s.DroppedEvents)
case "links":
err = decoder.Decode(&s.Links)
case "droppedLinksCount", "dropped_links_count":
err = decoder.Decode(&s.DroppedLinks)
case "status":
err = decoder.Decode(&s.Status)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// SpanFlags represents constants used to interpret the
// Span.flags field, which is protobuf 'fixed32' type and is to
// be used as bit-fields. Each non-zero value defined in this enum is
// a bit-mask. To extract the bit-field, for example, use an
// expression like:
//
// (span.flags & SPAN_FLAGS_TRACE_FLAGS_MASK)
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Note that Span flags were introduced in version 1.1 of the
// OpenTelemetry protocol. Older Span producers do not set this
// field, consequently consumers should not rely on the absence of a
// particular flag bit to indicate the presence of a particular feature.
type SpanFlags int32
const (
// Bits 0-7 are used for trace flags.
SpanFlagsTraceFlagsMask SpanFlags = 255
// Bits 8 and 9 are used to indicate that the parent span or link span is remote.
// Bit 8 (`HAS_IS_REMOTE`) indicates whether the value is known.
// Bit 9 (`IS_REMOTE`) indicates whether the span or link is remote.
SpanFlagsContextHasIsRemoteMask SpanFlags = 256
// SpanFlagsContextIsRemoteMask indicates the Span is remote.
SpanFlagsContextIsRemoteMask SpanFlags = 512
)
// SpanKind is the type of span. Can be used to specify additional relationships between spans
// in addition to a parent/child relationship.
type SpanKind int32
const (
// Indicates that the span represents an internal operation within an application,
// as opposed to an operation happening at the boundaries. Default value.
SpanKindInternal SpanKind = 1
// Indicates that the span covers server-side handling of an RPC or other
// remote network request.
SpanKindServer SpanKind = 2
// Indicates that the span describes a request to some remote service.
SpanKindClient SpanKind = 3
// Indicates that the span describes a producer sending a message to a broker.
// Unlike CLIENT and SERVER, there is often no direct critical path latency relationship
// between producer and consumer spans. A PRODUCER span ends when the message was accepted
// by the broker while the logical processing of the message might span a much longer time.
SpanKindProducer SpanKind = 4
// Indicates that the span describes a consumer receiving a message from a broker.
// Like the PRODUCER kind, there is often no direct critical path latency relationship
// between producer and consumer spans.
SpanKindConsumer SpanKind = 5
)
// Event is a time-stamped annotation of the span, consisting of user-supplied
// text description and key-value pairs.
type SpanEvent struct {
// time_unix_nano is the time the event occurred.
Time time.Time `json:"timeUnixNano,omitempty"`
// name of the event.
// This field is semantically required to be set to non-empty string.
Name string `json:"name,omitempty"`
// attributes is a collection of attribute key/value pairs on the event.
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of dropped attributes. If the value is 0,
// then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// MarshalJSON encodes e into OTLP formatted JSON.
func (e SpanEvent) MarshalJSON() ([]byte, error) {
t := e.Time.UnixNano()
if e.Time.IsZero() || t < 0 {
t = 0
}
type Alias SpanEvent
return json.Marshal(struct {
Alias
Time uint64 `json:"timeUnixNano,omitempty"`
}{
Alias: Alias(e),
Time: uint64(t), // nolint: gosec // >0 checked above
})
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into se.
func (se *SpanEvent) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid SpanEvent type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid SpanEvent field: %#v", keyIface)
}
switch key {
case "timeUnixNano", "time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
v := int64(min(val.Uint64(), math.MaxInt64)) // nolint: gosec // Overflow checked.
se.Time = time.Unix(0, v)
case "name":
err = decoder.Decode(&se.Name)
case "attributes":
err = decoder.Decode(&se.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&se.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A pointer from the current span to another span in the same trace or in a
// different trace. For example, this can be used in batching operations,
// where a single batch handler processes multiple requests from different
// traces or when the handler receives a request from a different project.
type SpanLink struct {
// A unique identifier of a trace that this linked span is part of. The ID is a
// 16-byte array.
TraceID TraceID `json:"traceId,omitempty"`
// A unique identifier for the linked span. The ID is an 8-byte array.
SpanID SpanID `json:"spanId,omitempty"`
// The trace_state associated with the link.
TraceState string `json:"traceState,omitempty"`
// attributes is a collection of attribute key/value pairs on the link.
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of dropped attributes. If the value is 0,
// then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
// Flags, a bit field.
//
// Bits 0-7 (8 least significant bits) are the trace flags as defined in W3C Trace
// Context specification. To read the 8-bit W3C trace flag, use
// `flags & SPAN_FLAGS_TRACE_FLAGS_MASK`.
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Bits 8 and 9 represent the 3 states of whether the link is remote.
// The states are (unknown, is not remote, is remote).
// To read whether the value is known, use `(flags & SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK) != 0`.
// To read whether the link is remote, use `(flags & SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK) != 0`.
//
// Readers MUST NOT assume that bits 10-31 (22 most significant bits) will be zero.
// When creating new spans, bits 10-31 (most-significant 22-bits) MUST be zero.
//
// [Optional].
Flags uint32 `json:"flags,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into sl.
func (sl *SpanLink) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid SpanLink type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid SpanLink field: %#v", keyIface)
}
switch key {
case "traceId", "trace_id":
err = decoder.Decode(&sl.TraceID)
case "spanId", "span_id":
err = decoder.Decode(&sl.SpanID)
case "traceState", "trace_state":
err = decoder.Decode(&sl.TraceState)
case "attributes":
err = decoder.Decode(&sl.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&sl.DroppedAttrs)
case "flags":
err = decoder.Decode(&sl.Flags)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
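
Putting the custom marshaling above together: start and end times are emitted as nanosecond integers and an all-zero parent span ID is omitted entirely. A small in-package sketch, with made-up field values, makes this concrete.

package telemetry

import (
	"encoding/json"
	"fmt"
	"time"
)

// exampleSpanJSON is an illustrative sketch, not part of the vendored file.
func exampleSpanJSON() {
	s := Span{
		TraceID:   TraceID{0x01},
		SpanID:    SpanID{0x02},
		Name:      "GET /items",
		Kind:      SpanKindServer,
		StartTime: time.Unix(0, 1_700_000_000_000_000_000),
		EndTime:   time.Unix(0, 1_700_000_000_500_000_000),
	}

	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	// The output contains "startTimeUnixNano":1700000000000000000 and no
	// "parentSpanId" key, because the parent span ID is all zeros.
}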

View File

@ -0,0 +1,40 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
// For the semantics of status codes see
// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status
type StatusCode int32
const (
// The default status.
StatusCodeUnset StatusCode = 0
// The Span has been validated by an Application developer or Operator to
// have completed successfully.
StatusCodeOK StatusCode = 1
// The Span contains an error.
StatusCodeError StatusCode = 2
)
var statusCodeStrings = []string{
"Unset",
"OK",
"Error",
}
func (s StatusCode) String() string {
if s >= 0 && int(s) < len(statusCodeStrings) {
return statusCodeStrings[s]
}
return "<unknown telemetry.StatusCode>"
}
// The Status type defines a logical error model that is suitable for different
// programming environments, including REST APIs and RPC APIs.
type Status struct {
// A developer-facing human readable error message.
Message string `json:"message,omitempty"`
// The status code.
Code StatusCode `json:"code,omitempty"`
}

View File

@ -0,0 +1,189 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Traces represents the traces data that can be stored in a persistent storage,
// OR can be embedded by other protocols that transfer OTLP traces data but do
// not implement the OTLP protocol.
//
// The main difference between this message and collector protocol is that
// in this message there will not be any "control" or "metadata" specific to
// OTLP protocol.
//
// When new fields are added into this message, the OTLP request MUST be updated
// as well.
type Traces struct {
// An array of ResourceSpans.
// For data coming from a single resource this array will typically contain
// one element. Intermediary nodes that receive data from multiple origins
// typically batch the data before forwarding further and in that case this
// array will contain multiple elements.
ResourceSpans []*ResourceSpans `json:"resourceSpans,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into td.
func (td *Traces) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid TracesData type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid TracesData field: %#v", keyIface)
}
switch key {
case "resourceSpans", "resource_spans":
err = decoder.Decode(&td.ResourceSpans)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A collection of ScopeSpans from a Resource.
type ResourceSpans struct {
// The resource for the spans in this message.
// If this field is not set then no resource info is known.
Resource Resource `json:"resource"`
// A list of ScopeSpans that originate from a resource.
ScopeSpans []*ScopeSpans `json:"scopeSpans,omitempty"`
// This schema_url applies to the data in the "resource" field. It does not apply
// to the data in the "scope_spans" field which have their own schema_url field.
SchemaURL string `json:"schemaUrl,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into rs.
func (rs *ResourceSpans) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid ResourceSpans type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid ResourceSpans field: %#v", keyIface)
}
switch key {
case "resource":
err = decoder.Decode(&rs.Resource)
case "scopeSpans", "scope_spans":
err = decoder.Decode(&rs.ScopeSpans)
case "schemaUrl", "schema_url":
err = decoder.Decode(&rs.SchemaURL)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A collection of Spans produced by an InstrumentationScope.
type ScopeSpans struct {
// The instrumentation scope information for the spans in this message.
// Semantically when InstrumentationScope isn't set, it is equivalent with
// an empty instrumentation scope name (unknown).
Scope *Scope `json:"scope"`
// A list of Spans that originate from an instrumentation scope.
Spans []*Span `json:"spans,omitempty"`
// The Schema URL, if known. This is the identifier of the Schema that the span data
// is recorded in. To learn more about Schema URL see
// https://opentelemetry.io/docs/specs/otel/schemas/#schema-url
// This schema_url applies to all spans and span events in the "spans" field.
SchemaURL string `json:"schemaUrl,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into ss.
func (ss *ScopeSpans) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid ScopeSpans type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid ScopeSpans field: %#v", keyIface)
}
switch key {
case "scope":
err = decoder.Decode(&ss.Scope)
case "spans":
err = decoder.Decode(&ss.Spans)
case "schemaUrl", "schema_url":
err = decoder.Decode(&ss.SchemaURL)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}

View File

@ -0,0 +1,453 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry // import "go.opentelemetry.io/otel/trace/internal/telemetry"
import (
"bytes"
"cmp"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"slices"
"strconv"
"unsafe"
)
// A Value represents a structured value.
// A zero value is valid and represents an empty value.
type Value struct {
// Ensure forward compatibility by explicitly making this not comparable.
noCmp [0]func() //nolint: unused // This is indeed used.
// num holds the value for Int64, Float64, and Bool. It holds the length
// for String, Bytes, Slice, Map.
num uint64
// any holds either the KindBool, KindInt64, KindFloat64, stringptr,
// bytesptr, sliceptr, or mapptr. If KindBool, KindInt64, or KindFloat64
// then the value of Value is in num as described above. Otherwise, it
// contains the value wrapped in the appropriate type.
any any
}
type (
// stringptr represents a value in Value.any for KindString Values.
stringptr *byte
// bytesptr represents a value in Value.any for KindBytes Values.
bytesptr *byte
// sliceptr represents a value in Value.any for KindSlice Values.
sliceptr *Value
// mapptr represents a value in Value.any for KindMap Values.
mapptr *Attr
)
// ValueKind is the kind of a [Value].
type ValueKind int
// ValueKind values.
const (
ValueKindEmpty ValueKind = iota
ValueKindBool
ValueKindFloat64
ValueKindInt64
ValueKindString
ValueKindBytes
ValueKindSlice
ValueKindMap
)
var valueKindStrings = []string{
"Empty",
"Bool",
"Float64",
"Int64",
"String",
"Bytes",
"Slice",
"Map",
}
func (k ValueKind) String() string {
if k >= 0 && int(k) < len(valueKindStrings) {
return valueKindStrings[k]
}
return "<unknown telemetry.ValueKind>"
}
// StringValue returns a new [Value] for a string.
func StringValue(v string) Value {
return Value{
num: uint64(len(v)),
any: stringptr(unsafe.StringData(v)),
}
}
// IntValue returns a [Value] for an int.
func IntValue(v int) Value { return Int64Value(int64(v)) }
// Int64Value returns a [Value] for an int64.
func Int64Value(v int64) Value {
return Value{
num: uint64(v), // nolint: gosec // Store raw bytes.
any: ValueKindInt64,
}
}
// Float64Value returns a [Value] for a float64.
func Float64Value(v float64) Value {
return Value{num: math.Float64bits(v), any: ValueKindFloat64}
}
// BoolValue returns a [Value] for a bool.
func BoolValue(v bool) Value { //nolint:revive // Not a control flag.
var n uint64
if v {
n = 1
}
return Value{num: n, any: ValueKindBool}
}
// BytesValue returns a [Value] for a byte slice. The passed slice must not be
// changed after it is passed.
func BytesValue(v []byte) Value {
return Value{
num: uint64(len(v)),
any: bytesptr(unsafe.SliceData(v)),
}
}
// SliceValue returns a [Value] for a slice of [Value]. The passed slice must
// not be changed after it is passed.
func SliceValue(vs ...Value) Value {
return Value{
num: uint64(len(vs)),
any: sliceptr(unsafe.SliceData(vs)),
}
}
// MapValue returns a new [Value] for a slice of key-value pairs. The passed
// slice must not be changed after it is passed.
func MapValue(kvs ...Attr) Value {
return Value{
num: uint64(len(kvs)),
any: mapptr(unsafe.SliceData(kvs)),
}
}
// AsString returns the value held by v as a string.
func (v Value) AsString() string {
if sp, ok := v.any.(stringptr); ok {
return unsafe.String(sp, v.num)
}
// TODO: error handle
return ""
}
// asString returns the value held by v as a string. It will panic if the Value
// is not KindString.
func (v Value) asString() string {
return unsafe.String(v.any.(stringptr), v.num)
}
// AsInt64 returns the value held by v as an int64.
func (v Value) AsInt64() int64 {
if v.Kind() != ValueKindInt64 {
// TODO: error handle
return 0
}
return v.asInt64()
}
// asInt64 returns the value held by v as an int64. If v is not of KindInt64,
// this will return garbage.
func (v Value) asInt64() int64 {
// Assumes v.num was a valid int64 (overflow not checked).
return int64(v.num) // nolint: gosec
}
// AsBool returns the value held by v as a bool.
func (v Value) AsBool() bool {
if v.Kind() != ValueKindBool {
// TODO: error handle
return false
}
return v.asBool()
}
// asBool returns the value held by v as a bool. If v is not of KindBool, this
// will return garbage.
func (v Value) asBool() bool { return v.num == 1 }
// AsFloat64 returns the value held by v as a float64.
func (v Value) AsFloat64() float64 {
if v.Kind() != ValueKindFloat64 {
// TODO: error handle
return 0
}
return v.asFloat64()
}
// asFloat64 returns the value held by v as a float64. If v is not of
// KindFloat64, this will return garbage.
func (v Value) asFloat64() float64 { return math.Float64frombits(v.num) }
// AsBytes returns the value held by v as a []byte.
func (v Value) AsBytes() []byte {
if sp, ok := v.any.(bytesptr); ok {
return unsafe.Slice((*byte)(sp), v.num)
}
// TODO: error handle
return nil
}
// asBytes returns the value held by v as a []byte. It will panic if the Value
// is not KindBytes.
func (v Value) asBytes() []byte {
return unsafe.Slice((*byte)(v.any.(bytesptr)), v.num)
}
// AsSlice returns the value held by v as a []Value.
func (v Value) AsSlice() []Value {
if sp, ok := v.any.(sliceptr); ok {
return unsafe.Slice((*Value)(sp), v.num)
}
// TODO: error handle
return nil
}
// asSlice returns the value held by v as a []Value. It will panic if the Value
// is not KindSlice.
func (v Value) asSlice() []Value {
return unsafe.Slice((*Value)(v.any.(sliceptr)), v.num)
}
// AsMap returns the value held by v as a []Attr.
func (v Value) AsMap() []Attr {
if sp, ok := v.any.(mapptr); ok {
return unsafe.Slice((*Attr)(sp), v.num)
}
// TODO: error handle
return nil
}
// asMap returns the value held by v as a []Attr. It will panic if the
// Value is not KindMap.
func (v Value) asMap() []Attr {
return unsafe.Slice((*Attr)(v.any.(mapptr)), v.num)
}
// Kind returns the Kind of v.
func (v Value) Kind() ValueKind {
switch x := v.any.(type) {
case ValueKind:
return x
case stringptr:
return ValueKindString
case bytesptr:
return ValueKindBytes
case sliceptr:
return ValueKindSlice
case mapptr:
return ValueKindMap
default:
return ValueKindEmpty
}
}
// Empty returns if v does not hold any value.
func (v Value) Empty() bool { return v.Kind() == ValueKindEmpty }
// Equal returns if v is equal to w.
func (v Value) Equal(w Value) bool {
k1 := v.Kind()
k2 := w.Kind()
if k1 != k2 {
return false
}
switch k1 {
case ValueKindInt64, ValueKindBool:
return v.num == w.num
case ValueKindString:
return v.asString() == w.asString()
case ValueKindFloat64:
return v.asFloat64() == w.asFloat64()
case ValueKindSlice:
return slices.EqualFunc(v.asSlice(), w.asSlice(), Value.Equal)
case ValueKindMap:
sv := sortMap(v.asMap())
sw := sortMap(w.asMap())
return slices.EqualFunc(sv, sw, Attr.Equal)
case ValueKindBytes:
return bytes.Equal(v.asBytes(), w.asBytes())
case ValueKindEmpty:
return true
default:
// TODO: error handle
return false
}
}
func sortMap(m []Attr) []Attr {
sm := make([]Attr, len(m))
copy(sm, m)
slices.SortFunc(sm, func(a, b Attr) int {
return cmp.Compare(a.Key, b.Key)
})
return sm
}
// String returns Value's value as a string, formatted like [fmt.Sprint].
//
// The returned string is meant for debugging;
// the string representation is not stable.
func (v Value) String() string {
switch v.Kind() {
case ValueKindString:
return v.asString()
case ValueKindInt64:
// Assumes v.num was a valid int64 (overflow not checked).
return strconv.FormatInt(int64(v.num), 10) // nolint: gosec
case ValueKindFloat64:
return strconv.FormatFloat(v.asFloat64(), 'g', -1, 64)
case ValueKindBool:
return strconv.FormatBool(v.asBool())
case ValueKindBytes:
return fmt.Sprint(v.asBytes())
case ValueKindMap:
return fmt.Sprint(v.asMap())
case ValueKindSlice:
return fmt.Sprint(v.asSlice())
case ValueKindEmpty:
return "<nil>"
default:
// Try to handle this as gracefully as possible.
//
// Don't panic here. The goal here is to have developers find this
// first if a ValueKind is not handled. It is
// preferable to have users open an issue asking why their attributes
// have an "unhandled: " prefix than to say that their code is panicking.
return fmt.Sprintf("<unhandled telemetry.ValueKind: %s>", v.Kind())
}
}
// MarshalJSON encodes v into OTLP formatted JSON.
func (v *Value) MarshalJSON() ([]byte, error) {
switch v.Kind() {
case ValueKindString:
return json.Marshal(struct {
Value string `json:"stringValue"`
}{v.asString()})
case ValueKindInt64:
return json.Marshal(struct {
Value string `json:"intValue"`
}{strconv.FormatInt(int64(v.num), 10)}) // nolint: gosec // From raw bytes.
case ValueKindFloat64:
return json.Marshal(struct {
Value float64 `json:"doubleValue"`
}{v.asFloat64()})
case ValueKindBool:
return json.Marshal(struct {
Value bool `json:"boolValue"`
}{v.asBool()})
case ValueKindBytes:
return json.Marshal(struct {
Value []byte `json:"bytesValue"`
}{v.asBytes()})
case ValueKindMap:
return json.Marshal(struct {
Value struct {
Values []Attr `json:"values"`
} `json:"kvlistValue"`
}{struct {
Values []Attr `json:"values"`
}{v.asMap()}})
case ValueKindSlice:
return json.Marshal(struct {
Value struct {
Values []Value `json:"values"`
} `json:"arrayValue"`
}{struct {
Values []Value `json:"values"`
}{v.asSlice()}})
case ValueKindEmpty:
return nil, nil
default:
return nil, fmt.Errorf("unknown Value kind: %s", v.Kind().String())
}
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into v.
func (v *Value) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Value type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Value key: %#v", keyIface)
}
switch key {
case "stringValue", "string_value":
var val string
err = decoder.Decode(&val)
*v = StringValue(val)
case "boolValue", "bool_value":
var val bool
err = decoder.Decode(&val)
*v = BoolValue(val)
case "intValue", "int_value":
var val protoInt64
err = decoder.Decode(&val)
*v = Int64Value(val.Int64())
case "doubleValue", "double_value":
var val float64
err = decoder.Decode(&val)
*v = Float64Value(val)
case "bytesValue", "bytes_value":
var val64 string
if err := decoder.Decode(&val64); err != nil {
return err
}
var val []byte
val, err = base64.StdEncoding.DecodeString(val64)
*v = BytesValue(val)
case "arrayValue", "array_value":
var val struct{ Values []Value }
err = decoder.Decode(&val)
*v = SliceValue(val.Values...)
case "kvlistValue", "kvlist_value":
var val struct{ Values []Attr }
err = decoder.Decode(&val)
*v = MapValue(val.Values...)
default:
// Skip unknown.
continue
}
// Use first valid. Ignore the rest.
return err
}
// Only unknown fields. Return nil without unmarshaling any value.
return nil
}
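
Finally, the MarshalJSON cases above produce the OTLP AnyValue shapes; note that 64-bit integers are emitted as quoted strings, matching the protobuf JSON mapping. An in-package sketch with made-up values:

package telemetry

import (
	"encoding/json"
	"fmt"
)

// exampleValueJSON is an illustrative sketch, not part of the vendored file.
func exampleValueJSON() {
	vals := []Value{
		StringValue("checkout"),
		Int64Value(42),
		BoolValue(true),
		SliceValue(Int64Value(1), Int64Value(2)),
	}
	for i := range vals {
		out, _ := json.Marshal(&vals[i]) // MarshalJSON has a pointer receiver.
		fmt.Println(string(out))
	}
	// {"stringValue":"checkout"}
	// {"intValue":"42"}
	// {"boolValue":true}
	// {"arrayValue":{"values":[{"intValue":"1"},{"intValue":"2"}]}}
}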

View File

@ -82,4 +82,22 @@ func (noopSpan) AddLink(Link) {}
  func (noopSpan) SetName(string) {}

  // TracerProvider returns a no-op TracerProvider.
- func (noopSpan) TracerProvider() TracerProvider { return noopTracerProvider{} }
+ func (s noopSpan) TracerProvider() TracerProvider {
+ 	return s.tracerProvider(autoInstEnabled)
+ }
+
+ // autoInstEnabled defines if the auto-instrumentation SDK is enabled.
+ //
+ // The auto-instrumentation is expected to overwrite this value to true when it
+ // attaches to the process.
+ var autoInstEnabled = new(bool)
+
+ // tracerProvider returns a noopTracerProvider if autoEnabled is false,
+ // otherwise it returns a TracerProvider from the sdk package used in
+ // auto-instrumentation.
+ func (noopSpan) tracerProvider(autoEnabled *bool) TracerProvider {
+ 	if *autoEnabled {
+ 		return newAutoTracerProvider()
+ 	}
+ 	return noopTracerProvider{}
+ }

View File

@ -5,5 +5,5 @@ package otel // import "go.opentelemetry.io/otel"
  // Version is the current release version of OpenTelemetry in use.
  func Version() string {
- 	return "1.34.0"
+ 	return "1.35.0"
  }

View File

@ -3,7 +3,7 @@
module-sets:
  stable-v1:
-    version: v1.34.0
    version: v1.35.0
    modules:
      - go.opentelemetry.io/otel
      - go.opentelemetry.io/otel/bridge/opencensus
@ -23,11 +23,11 @@ module-sets:
      - go.opentelemetry.io/otel/sdk/metric
      - go.opentelemetry.io/otel/trace
  experimental-metrics:
-    version: v0.56.0
    version: v0.57.0
    modules:
      - go.opentelemetry.io/otel/exporters/prometheus
  experimental-logs:
-    version: v0.10.0
    version: v0.11.0
    modules:
      - go.opentelemetry.io/otel/log
      - go.opentelemetry.io/otel/sdk/log
@ -40,3 +40,4 @@ module-sets:
      - go.opentelemetry.io/otel/schema
excluded-modules:
  - go.opentelemetry.io/otel/internal/tools
- go.opentelemetry.io/otel/trace/internal/telemetry/test

View File

@ -20,14 +20,19 @@ import (
// returned by MultiAlgorithmSigner and don't appear in the Signature.Format
// field.
const (
	CertAlgoRSAv01 = "ssh-rsa-cert-v01@openssh.com"
-	CertAlgoDSAv01        = "ssh-dss-cert-v01@openssh.com"
-	CertAlgoECDSA256v01   = "ecdsa-sha2-nistp256-cert-v01@openssh.com"
-	CertAlgoECDSA384v01   = "ecdsa-sha2-nistp384-cert-v01@openssh.com"
-	CertAlgoECDSA521v01   = "ecdsa-sha2-nistp521-cert-v01@openssh.com"
-	CertAlgoSKECDSA256v01 = "sk-ecdsa-sha2-nistp256-cert-v01@openssh.com"
-	CertAlgoED25519v01    = "ssh-ed25519-cert-v01@openssh.com"
-	CertAlgoSKED25519v01  = "sk-ssh-ed25519-cert-v01@openssh.com"
	// Deprecated: DSA is only supported at insecure key sizes, and was removed
	// from major implementations.
	CertAlgoDSAv01 = InsecureCertAlgoDSAv01
	// Deprecated: DSA is only supported at insecure key sizes, and was removed
	// from major implementations.
	InsecureCertAlgoDSAv01 = "ssh-dss-cert-v01@openssh.com"
	CertAlgoECDSA256v01    = "ecdsa-sha2-nistp256-cert-v01@openssh.com"
CertAlgoECDSA384v01 = "ecdsa-sha2-nistp384-cert-v01@openssh.com"
CertAlgoECDSA521v01 = "ecdsa-sha2-nistp521-cert-v01@openssh.com"
CertAlgoSKECDSA256v01 = "sk-ecdsa-sha2-nistp256-cert-v01@openssh.com"
CertAlgoED25519v01 = "ssh-ed25519-cert-v01@openssh.com"
CertAlgoSKED25519v01 = "sk-ssh-ed25519-cert-v01@openssh.com"
	// CertAlgoRSASHA256v01 and CertAlgoRSASHA512v01 can't appear as a
	// Certificate.Type (or PublicKey.Type), but only in
@ -485,16 +490,16 @@ func (c *Certificate) SignCert(rand io.Reader, authority Signer) error {
//
// This map must be kept in sync with the one in agent/client.go.
var certKeyAlgoNames = map[string]string{
	CertAlgoRSAv01:       KeyAlgoRSA,
	CertAlgoRSASHA256v01: KeyAlgoRSASHA256,
	CertAlgoRSASHA512v01: KeyAlgoRSASHA512,
-	CertAlgoDSAv01:        KeyAlgoDSA,
	InsecureCertAlgoDSAv01: InsecureKeyAlgoDSA,
	CertAlgoECDSA256v01:   KeyAlgoECDSA256,
	CertAlgoECDSA384v01:   KeyAlgoECDSA384,
	CertAlgoECDSA521v01:   KeyAlgoECDSA521,
	CertAlgoSKECDSA256v01: KeyAlgoSKECDSA256,
	CertAlgoED25519v01:    KeyAlgoED25519,
	CertAlgoSKED25519v01:  KeyAlgoSKED25519,
}

// underlyingAlgo returns the signature algorithm associated with algo (which is
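Because the deprecated identifiers are now plain aliases of the new Insecure* constants (here and in keys.go further down), existing callers keep compiling; a minimal, purely illustrative check:

	_ = ssh.CertAlgoDSAv01 == ssh.InsecureCertAlgoDSAv01 // true
	_ = ssh.KeyAlgoDSA == ssh.InsecureKeyAlgoDSA         // true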

View File

@ -58,11 +58,11 @@ func newRC4(key, iv []byte) (cipher.Stream, error) {
type cipherMode struct {
	keySize int
	ivSize  int
-	create  func(key, iv []byte, macKey []byte, algs directionAlgorithms) (packetCipher, error)
	create  func(key, iv []byte, macKey []byte, algs DirectionAlgorithms) (packetCipher, error)
}

-func streamCipherMode(skip int, createFunc func(key, iv []byte) (cipher.Stream, error)) func(key, iv []byte, macKey []byte, algs directionAlgorithms) (packetCipher, error) {
func streamCipherMode(skip int, createFunc func(key, iv []byte) (cipher.Stream, error)) func(key, iv []byte, macKey []byte, algs DirectionAlgorithms) (packetCipher, error) {
-	return func(key, iv, macKey []byte, algs directionAlgorithms) (packetCipher, error) {
	return func(key, iv, macKey []byte, algs DirectionAlgorithms) (packetCipher, error) {
		stream, err := createFunc(key, iv)
		if err != nil {
			return nil, err
@ -98,36 +98,36 @@ func streamCipherMode(skip int, createFunc func(key, iv []byte) (cipher.Stream,
var cipherModes = map[string]*cipherMode{
	// Ciphers from RFC 4344, which introduced many CTR-based ciphers. Algorithms
	// are defined in the order specified in the RFC.
-	"aes128-ctr": {16, aes.BlockSize, streamCipherMode(0, newAESCTR)},
-	"aes192-ctr": {24, aes.BlockSize, streamCipherMode(0, newAESCTR)},
-	"aes256-ctr": {32, aes.BlockSize, streamCipherMode(0, newAESCTR)},
	CipherAES128CTR: {16, aes.BlockSize, streamCipherMode(0, newAESCTR)},
	CipherAES192CTR: {24, aes.BlockSize, streamCipherMode(0, newAESCTR)},
	CipherAES256CTR: {32, aes.BlockSize, streamCipherMode(0, newAESCTR)},

	// Ciphers from RFC 4345, which introduces security-improved arcfour ciphers.
	// They are defined in the order specified in the RFC.
-	"arcfour128": {16, 0, streamCipherMode(1536, newRC4)},
-	"arcfour256": {32, 0, streamCipherMode(1536, newRC4)},
	InsecureCipherRC4128: {16, 0, streamCipherMode(1536, newRC4)},
	InsecureCipherRC4256: {32, 0, streamCipherMode(1536, newRC4)},

	// Cipher defined in RFC 4253, which describes SSH Transport Layer Protocol.
	// Note that this cipher is not safe, as stated in RFC 4253: "Arcfour (and
	// RC4) has problems with weak keys, and should be used with caution."
	// RFC 4345 introduces improved versions of Arcfour.
-	"arcfour": {16, 0, streamCipherMode(0, newRC4)},
	InsecureCipherRC4: {16, 0, streamCipherMode(0, newRC4)},

	// AEAD ciphers
-	gcm128CipherID:     {16, 12, newGCMCipher},
-	gcm256CipherID:     {32, 12, newGCMCipher},
-	chacha20Poly1305ID: {64, 0, newChaCha20Cipher},
	CipherAES128GCM:        {16, 12, newGCMCipher},
	CipherAES256GCM:        {32, 12, newGCMCipher},
	CipherChaCha20Poly1305: {64, 0, newChaCha20Cipher},

	// CBC mode is insecure and so is not included in the default config.
	// (See https://www.ieee-security.org/TC/SP2013/papers/4977a526.pdf). If absolutely
	// needed, it's possible to specify a custom Config to enable it.
	// You should expect that an active attacker can recover plaintext if
	// you do.
-	aes128cbcID: {16, aes.BlockSize, newAESCBCCipher},
	InsecureCipherAES128CBC: {16, aes.BlockSize, newAESCBCCipher},

	// 3des-cbc is insecure and is not included in the default
	// config.
-	tripledescbcID: {24, des.BlockSize, newTripleDESCBCCipher},
	InsecureCipherTripleDESCBC: {24, des.BlockSize, newTripleDESCBCCipher},
}

// prefixLen is the length of the packet prefix that contains the packet length
@ -307,7 +307,7 @@ type gcmCipher struct {
	buf       []byte
}

-func newGCMCipher(key, iv, unusedMacKey []byte, unusedAlgs directionAlgorithms) (packetCipher, error) {
func newGCMCipher(key, iv, unusedMacKey []byte, unusedAlgs DirectionAlgorithms) (packetCipher, error) {
	c, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
@ -429,7 +429,7 @@ type cbcCipher struct {
	oracleCamouflage uint32
}

-func newCBCCipher(c cipher.Block, key, iv, macKey []byte, algs directionAlgorithms) (packetCipher, error) {
func newCBCCipher(c cipher.Block, key, iv, macKey []byte, algs DirectionAlgorithms) (packetCipher, error) {
	cbc := &cbcCipher{
		mac:       macModes[algs.MAC].new(macKey),
		decrypter: cipher.NewCBCDecrypter(c, iv),
@ -443,7 +443,7 @@ func newCBCCipher(c cipher.Block, key, iv, macKey []byte, algs directionAlgorith
	return cbc, nil
}

-func newAESCBCCipher(key, iv, macKey []byte, algs directionAlgorithms) (packetCipher, error) {
func newAESCBCCipher(key, iv, macKey []byte, algs DirectionAlgorithms) (packetCipher, error) {
	c, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
@ -457,7 +457,7 @@ func newAESCBCCipher(key, iv, macKey []byte, algs directionAlgorithms) (packetCi
	return cbc, nil
}

-func newTripleDESCBCCipher(key, iv, macKey []byte, algs directionAlgorithms) (packetCipher, error) {
func newTripleDESCBCCipher(key, iv, macKey []byte, algs DirectionAlgorithms) (packetCipher, error) {
	c, err := des.NewTripleDESCipher(key)
	if err != nil {
		return nil, err
@ -635,8 +635,6 @@ func (c *cbcCipher) writeCipherPacket(seqNum uint32, w io.Writer, rand io.Reader
	return nil
}

-const chacha20Poly1305ID = "chacha20-poly1305@openssh.com"

// chacha20Poly1305Cipher implements the chacha20-poly1305@openssh.com
// AEAD, which is described here:
//
@ -650,7 +648,7 @@ type chacha20Poly1305Cipher struct {
	buf []byte
}

-func newChaCha20Cipher(key, unusedIV, unusedMACKey []byte, unusedAlgs directionAlgorithms) (packetCipher, error) {
func newChaCha20Cipher(key, unusedIV, unusedMACKey []byte, unusedAlgs DirectionAlgorithms) (packetCipher, error) {
	if len(key) != 64 {
		panic(len(key))
	}

View File

@ -110,6 +110,7 @@ func (c *connection) clientHandshake(dialAddress string, config *ClientConfig) e
	}

	c.sessionID = c.transport.getSessionID()
	c.algorithms = c.transport.getAlgorithms()
	return c.clientAuthenticate(config)
}

View File

@ -10,6 +10,7 @@ import (
"fmt" "fmt"
"io" "io"
"math" "math"
"slices"
"sync" "sync"
_ "crypto/sha1" _ "crypto/sha1"
@ -24,69 +25,258 @@ const (
	serviceSSH = "ssh-connection"
)

-// supportedCiphers lists ciphers we support but might not recommend.
-var supportedCiphers = []string{
-	"aes128-ctr", "aes192-ctr", "aes256-ctr",
-	"aes128-gcm@openssh.com", gcm256CipherID,
-	chacha20Poly1305ID,
-	"arcfour256", "arcfour128", "arcfour",
-	aes128cbcID,
-	tripledescbcID,
-}

// The ciphers currently or previously implemented by this library, to use in
// [Config.Ciphers]. For a list, see the [Algorithms.Ciphers] returned by
// [SupportedAlgorithms] or [InsecureAlgorithms].
const (
	CipherAES128GCM        = "aes128-gcm@openssh.com"
	CipherAES256GCM        = "aes256-gcm@openssh.com"
	CipherChaCha20Poly1305 = "chacha20-poly1305@openssh.com"
	CipherAES128CTR        = "aes128-ctr"
CipherAES192CTR = "aes192-ctr"
CipherAES256CTR = "aes256-ctr"
InsecureCipherAES128CBC = "aes128-cbc"
InsecureCipherTripleDESCBC = "3des-cbc"
InsecureCipherRC4 = "arcfour"
InsecureCipherRC4128 = "arcfour128"
InsecureCipherRC4256 = "arcfour256"
)
// The key exchanges currently or previously implemented by this library, to use
// in [Config.KeyExchanges]. For a list, see the
// [Algorithms.KeyExchanges] returned by [SupportedAlgorithms] or
// [InsecureAlgorithms].
const (
InsecureKeyExchangeDH1SHA1 = "diffie-hellman-group1-sha1"
InsecureKeyExchangeDH14SHA1 = "diffie-hellman-group14-sha1"
KeyExchangeDH14SHA256 = "diffie-hellman-group14-sha256"
KeyExchangeDH16SHA512 = "diffie-hellman-group16-sha512"
KeyExchangeECDHP256 = "ecdh-sha2-nistp256"
KeyExchangeECDHP384 = "ecdh-sha2-nistp384"
KeyExchangeECDHP521 = "ecdh-sha2-nistp521"
KeyExchangeCurve25519 = "curve25519-sha256"
InsecureKeyExchangeDHGEXSHA1 = "diffie-hellman-group-exchange-sha1"
KeyExchangeDHGEXSHA256 = "diffie-hellman-group-exchange-sha256"
// KeyExchangeMLKEM768X25519 is supported from Go 1.24.
KeyExchangeMLKEM768X25519 = "mlkem768x25519-sha256"
// An alias for KeyExchangeCurve25519SHA256. This kex ID will be added if
// KeyExchangeCurve25519SHA256 is requested for backward compatibility with
// OpenSSH versions up to 7.2.
keyExchangeCurve25519LibSSH = "curve25519-sha256@libssh.org"
)
// The message authentication code (MAC) currently or previously implemented by
// this library, to use in [Config.MACs]. For a list, see the
// [Algorithms.MACs] returned by [SupportedAlgorithms] or
// [InsecureAlgorithms].
const (
HMACSHA256ETM = "hmac-sha2-256-etm@openssh.com"
HMACSHA512ETM = "hmac-sha2-512-etm@openssh.com"
HMACSHA256 = "hmac-sha2-256"
HMACSHA512 = "hmac-sha2-512"
HMACSHA1 = "hmac-sha1"
InsecureHMACSHA196 = "hmac-sha1-96"
)
var (
// supportedKexAlgos specifies key-exchange algorithms implemented by this
// package in preference order, excluding those with security issues.
supportedKexAlgos = []string{
KeyExchangeCurve25519,
KeyExchangeECDHP256,
KeyExchangeECDHP384,
KeyExchangeECDHP521,
KeyExchangeDH14SHA256,
KeyExchangeDH16SHA512,
KeyExchangeDHGEXSHA256,
}
// defaultKexAlgos specifies the default preference for key-exchange
// algorithms in preference order.
defaultKexAlgos = []string{
KeyExchangeCurve25519,
KeyExchangeECDHP256,
KeyExchangeECDHP384,
KeyExchangeECDHP521,
KeyExchangeDH14SHA256,
InsecureKeyExchangeDH14SHA1,
}
// insecureKexAlgos specifies key-exchange algorithms implemented by this
// package and which have security issues.
insecureKexAlgos = []string{
InsecureKeyExchangeDH14SHA1,
InsecureKeyExchangeDH1SHA1,
InsecureKeyExchangeDHGEXSHA1,
}
// supportedCiphers specifies cipher algorithms implemented by this package
// in preference order, excluding those with security issues.
supportedCiphers = []string{
CipherAES128GCM,
CipherAES256GCM,
CipherChaCha20Poly1305,
CipherAES128CTR,
CipherAES192CTR,
CipherAES256CTR,
}
// defaultCiphers specifies the default preference for ciphers algorithms
// in preference order.
defaultCiphers = supportedCiphers
// insecureCiphers specifies cipher algorithms implemented by this
// package and which have security issues.
insecureCiphers = []string{
InsecureCipherAES128CBC,
InsecureCipherTripleDESCBC,
InsecureCipherRC4256,
InsecureCipherRC4128,
InsecureCipherRC4,
}
// supportedMACs specifies MAC algorithms implemented by this package in
// preference order, excluding those with security issues.
supportedMACs = []string{
HMACSHA256ETM,
HMACSHA512ETM,
HMACSHA256,
HMACSHA512,
HMACSHA1,
}
// defaultMACs specifies the default preference for MAC algorithms in
// preference order.
defaultMACs = []string{
HMACSHA256ETM,
HMACSHA512ETM,
HMACSHA256,
HMACSHA512,
HMACSHA1,
InsecureHMACSHA196,
}
// insecureMACs specifies MAC algorithms implemented by this
// package and which have security issues.
insecureMACs = []string{
InsecureHMACSHA196,
}
// supportedHostKeyAlgos specifies the supported host-key algorithms (i.e.
// methods of authenticating servers) implemented by this package in
// preference order, excluding those with security issues.
supportedHostKeyAlgos = []string{
CertAlgoRSASHA256v01,
CertAlgoRSASHA512v01,
CertAlgoECDSA256v01,
CertAlgoECDSA384v01,
CertAlgoECDSA521v01,
CertAlgoED25519v01,
KeyAlgoRSASHA256,
KeyAlgoRSASHA512,
KeyAlgoECDSA256,
KeyAlgoECDSA384,
KeyAlgoECDSA521,
KeyAlgoED25519,
}
// defaultHostKeyAlgos specifies the default preference for host-key
// algorithms in preference order.
defaultHostKeyAlgos = []string{
CertAlgoRSASHA256v01,
CertAlgoRSASHA512v01,
CertAlgoRSAv01,
InsecureCertAlgoDSAv01,
CertAlgoECDSA256v01,
CertAlgoECDSA384v01,
CertAlgoECDSA521v01,
CertAlgoED25519v01,
KeyAlgoECDSA256,
KeyAlgoECDSA384,
KeyAlgoECDSA521,
KeyAlgoRSASHA256,
KeyAlgoRSASHA512,
KeyAlgoRSA,
InsecureKeyAlgoDSA,
KeyAlgoED25519,
}
// insecureHostKeyAlgos specifies host-key algorithms implemented by this
// package and which have security issues.
insecureHostKeyAlgos = []string{
KeyAlgoRSA,
InsecureKeyAlgoDSA,
CertAlgoRSAv01,
InsecureCertAlgoDSAv01,
}
// supportedPubKeyAuthAlgos specifies the supported client public key
// authentication algorithms. Note that this doesn't include certificate
// types since those use the underlying algorithm. Order is irrelevant.
supportedPubKeyAuthAlgos = []string{
KeyAlgoED25519,
KeyAlgoSKED25519,
KeyAlgoSKECDSA256,
KeyAlgoECDSA256,
KeyAlgoECDSA384,
KeyAlgoECDSA521,
KeyAlgoRSASHA256,
KeyAlgoRSASHA512,
}
// defaultPubKeyAuthAlgos specifies the preferred client public key
// authentication algorithms. This list is sent to the client if it supports
// the server-sig-algs extension. Order is irrelevant.
defaultPubKeyAuthAlgos = []string{
KeyAlgoED25519,
KeyAlgoSKED25519,
KeyAlgoSKECDSA256,
KeyAlgoECDSA256,
KeyAlgoECDSA384,
KeyAlgoECDSA521,
KeyAlgoRSASHA256,
KeyAlgoRSASHA512,
KeyAlgoRSA,
InsecureKeyAlgoDSA,
}
// insecurePubKeyAuthAlgos specifies client public key authentication
// algorithms implemented by this package and which have security issues.
insecurePubKeyAuthAlgos = []string{
KeyAlgoRSA,
InsecureKeyAlgoDSA,
}
)
// NegotiatedAlgorithms defines algorithms negotiated between client and server.
type NegotiatedAlgorithms struct {
KeyExchange string
HostKey string
Read DirectionAlgorithms
Write DirectionAlgorithms
}

-// preferredCiphers specifies the default preference for ciphers.
-var preferredCiphers = []string{
-	"aes128-gcm@openssh.com", gcm256CipherID,
-	chacha20Poly1305ID,
-	"aes128-ctr", "aes192-ctr", "aes256-ctr",
-}
-
-// supportedKexAlgos specifies the supported key-exchange algorithms in
-// preference order.
-var supportedKexAlgos = []string{
-	kexAlgoCurve25519SHA256, kexAlgoCurve25519SHA256LibSSH,
-	// P384 and P521 are not constant-time yet, but since we don't
-	// reuse ephemeral keys, using them for ECDH should be OK.
-	kexAlgoECDH256, kexAlgoECDH384, kexAlgoECDH521,
-	kexAlgoDH14SHA256, kexAlgoDH16SHA512, kexAlgoDH14SHA1,
-	kexAlgoDH1SHA1,
-}
-
-// serverForbiddenKexAlgos contains key exchange algorithms, that are forbidden
-// for the server half.
-var serverForbiddenKexAlgos = map[string]struct{}{
-	kexAlgoDHGEXSHA1:   {}, // server half implementation is only minimal to satisfy the automated tests
-	kexAlgoDHGEXSHA256: {}, // server half implementation is only minimal to satisfy the automated tests
-}
-
-// preferredKexAlgos specifies the default preference for key-exchange
-// algorithms in preference order. The diffie-hellman-group16-sha512 algorithm
-// is disabled by default because it is a bit slower than the others.
-var preferredKexAlgos = []string{
-	kexAlgoCurve25519SHA256, kexAlgoCurve25519SHA256LibSSH,
-	kexAlgoECDH256, kexAlgoECDH384, kexAlgoECDH521,
-	kexAlgoDH14SHA256, kexAlgoDH14SHA1,
-}
-
-// supportedHostKeyAlgos specifies the supported host-key algorithms (i.e. methods
-// of authenticating servers) in preference order.
-var supportedHostKeyAlgos = []string{
-	CertAlgoRSASHA256v01, CertAlgoRSASHA512v01,
-	CertAlgoRSAv01, CertAlgoDSAv01, CertAlgoECDSA256v01,
-	CertAlgoECDSA384v01, CertAlgoECDSA521v01, CertAlgoED25519v01,
-	KeyAlgoECDSA256, KeyAlgoECDSA384, KeyAlgoECDSA521,
-	KeyAlgoRSASHA256, KeyAlgoRSASHA512,
-	KeyAlgoRSA, KeyAlgoDSA,
-	KeyAlgoED25519,
-}
-
-// supportedMACs specifies a default set of MAC algorithms in preference order.
-// This is based on RFC 4253, section 6.4, but with hmac-md5 variants removed
-// because they have reached the end of their useful life.
-var supportedMACs = []string{
-	"hmac-sha2-256-etm@openssh.com", "hmac-sha2-512-etm@openssh.com", "hmac-sha2-256", "hmac-sha2-512", "hmac-sha1", "hmac-sha1-96",
-}

// Algorithms defines a set of algorithms that can be configured in the client
// or server config for negotiation during a handshake.
type Algorithms struct {
	KeyExchanges   []string
	Ciphers        []string
	MACs           []string
	HostKeys       []string
	PublicKeyAuths []string
}

// SupportedAlgorithms returns algorithms currently implemented by this package,
// excluding those with security issues, which are returned by
// InsecureAlgorithms. The algorithms listed here are in preference order.
func SupportedAlgorithms() Algorithms {
	return Algorithms{
		Ciphers:        slices.Clone(supportedCiphers),
		MACs:           slices.Clone(supportedMACs),
		KeyExchanges:   slices.Clone(supportedKexAlgos),
		HostKeys:       slices.Clone(supportedHostKeyAlgos),
		PublicKeyAuths: slices.Clone(supportedPubKeyAuthAlgos),
	}
}

// InsecureAlgorithms returns algorithms currently implemented by this package
// and which have security issues.
func InsecureAlgorithms() Algorithms {
	return Algorithms{
		KeyExchanges:   slices.Clone(insecureKexAlgos),
		Ciphers:        slices.Clone(insecureCiphers),
		MACs:           slices.Clone(insecureMACs),
		HostKeys:       slices.Clone(insecureHostKeyAlgos),
		PublicKeyAuths: slices.Clone(insecurePubKeyAuthAlgos),
	}
}

var supportedCompressions = []string{compressionNone}
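A hypothetical sketch of how a caller might combine the new SupportedAlgorithms/InsecureAlgorithms helpers with an ssh.ClientConfig; the choice of which legacy cipher to re-enable is illustrative only:

	algs := ssh.SupportedAlgorithms()
	var cfg ssh.ClientConfig
	cfg.KeyExchanges = algs.KeyExchanges
	cfg.MACs = algs.MACs
	cfg.HostKeyAlgorithms = algs.HostKeys
	// Deliberately opt back in to one cipher that the defaults exclude.
	cfg.Ciphers = append(algs.Ciphers, ssh.InsecureCipherAES128CBC)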
@ -94,13 +284,13 @@ var supportedCompressions = []string{compressionNone}
// hashFuncs keeps the mapping of supported signature algorithms to their
// respective hashes needed for signing and verification.
var hashFuncs = map[string]crypto.Hash{
	KeyAlgoRSA:       crypto.SHA1,
	KeyAlgoRSASHA256: crypto.SHA256,
	KeyAlgoRSASHA512: crypto.SHA512,
-	KeyAlgoDSA:       crypto.SHA1,
	InsecureKeyAlgoDSA: crypto.SHA1,
	KeyAlgoECDSA256:    crypto.SHA256,
	KeyAlgoECDSA384:    crypto.SHA384,
	KeyAlgoECDSA521:    crypto.SHA512,
	// KeyAlgoED25519 doesn't pre-hash.
	KeyAlgoSKECDSA256: crypto.SHA256,
	KeyAlgoSKED25519:  crypto.SHA256,
@ -135,18 +325,6 @@ func isRSACert(algo string) bool {
	return isRSA(algo)
}

-// supportedPubKeyAuthAlgos specifies the supported client public key
-// authentication algorithms. Note that this doesn't include certificate types
-// since those use the underlying algorithm. This list is sent to the client if
-// it supports the server-sig-algs extension. Order is irrelevant.
-var supportedPubKeyAuthAlgos = []string{
-	KeyAlgoED25519,
-	KeyAlgoSKED25519, KeyAlgoSKECDSA256,
-	KeyAlgoECDSA256, KeyAlgoECDSA384, KeyAlgoECDSA521,
-	KeyAlgoRSASHA256, KeyAlgoRSASHA512, KeyAlgoRSA,
-	KeyAlgoDSA,
-}

// unexpectedMessageError results when the SSH message that we received didn't
// match what we wanted.
func unexpectedMessageError(expected, got uint8) error {
@ -169,20 +347,21 @@ func findCommon(what string, client []string, server []string) (common string, e
return "", fmt.Errorf("ssh: no common algorithm for %s; client offered: %v, server offered: %v", what, client, server) return "", fmt.Errorf("ssh: no common algorithm for %s; client offered: %v, server offered: %v", what, client, server)
} }
// directionAlgorithms records algorithm choices in one direction (either read or write) // DirectionAlgorithms defines the algorithms negotiated in one direction
type directionAlgorithms struct { // (either read or write).
type DirectionAlgorithms struct {
Cipher string Cipher string
MAC string MAC string
Compression string compression string
} }
// rekeyBytes returns a rekeying intervals in bytes. // rekeyBytes returns a rekeying intervals in bytes.
func (a *directionAlgorithms) rekeyBytes() int64 { func (a *DirectionAlgorithms) rekeyBytes() int64 {
// According to RFC 4344 block ciphers should rekey after // According to RFC 4344 block ciphers should rekey after
// 2^(BLOCKSIZE/4) blocks. For all AES flavors BLOCKSIZE is // 2^(BLOCKSIZE/4) blocks. For all AES flavors BLOCKSIZE is
// 128. // 128.
switch a.Cipher { switch a.Cipher {
case "aes128-ctr", "aes192-ctr", "aes256-ctr", gcm128CipherID, gcm256CipherID, aes128cbcID: case CipherAES128CTR, CipherAES192CTR, CipherAES256CTR, CipherAES128GCM, CipherAES256GCM, InsecureCipherAES128CBC:
return 16 * (1 << 32) return 16 * (1 << 32)
} }
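	// Worked out: AES has a 128-bit block, so RFC 4344 allows 2^(128/4) = 2^32
	// blocks before rekeying; at 16 bytes per block that is the 16 * (1 << 32)
	// bytes (64 GiB) returned above.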
@ -192,32 +371,25 @@ func (a *directionAlgorithms) rekeyBytes() int64 {
}

var aeadCiphers = map[string]bool{
-	gcm128CipherID:     true,
-	gcm256CipherID:     true,
-	chacha20Poly1305ID: true,
	CipherAES128GCM:        true,
	CipherAES256GCM:        true,
	CipherChaCha20Poly1305: true,
}

-type algorithms struct {
-	kex     string
-	hostKey string
-	w       directionAlgorithms
-	r       directionAlgorithms
-}
-
-func findAgreedAlgorithms(isClient bool, clientKexInit, serverKexInit *kexInitMsg) (algs *algorithms, err error) {
-	result := &algorithms{}
-
-	result.kex, err = findCommon("key exchange", clientKexInit.KexAlgos, serverKexInit.KexAlgos)
func findAgreedAlgorithms(isClient bool, clientKexInit, serverKexInit *kexInitMsg) (algs *NegotiatedAlgorithms, err error) {
	result := &NegotiatedAlgorithms{}

	result.KeyExchange, err = findCommon("key exchange", clientKexInit.KexAlgos, serverKexInit.KexAlgos)
	if err != nil {
		return
	}

-	result.hostKey, err = findCommon("host key", clientKexInit.ServerHostKeyAlgos, serverKexInit.ServerHostKeyAlgos)
	result.HostKey, err = findCommon("host key", clientKexInit.ServerHostKeyAlgos, serverKexInit.ServerHostKeyAlgos)
	if err != nil {
		return
	}

-	stoc, ctos := &result.w, &result.r
	stoc, ctos := &result.Write, &result.Read
	if isClient {
		ctos, stoc = stoc, ctos
	}
@ -246,12 +418,12 @@ func findAgreedAlgorithms(isClient bool, clientKexInit, serverKexInit *kexInitMs
		}
	}

-	ctos.Compression, err = findCommon("client to server compression", clientKexInit.CompressionClientServer, serverKexInit.CompressionClientServer)
	ctos.compression, err = findCommon("client to server compression", clientKexInit.CompressionClientServer, serverKexInit.CompressionClientServer)
	if err != nil {
		return
	}

-	stoc.Compression, err = findCommon("server to client compression", clientKexInit.CompressionServerClient, serverKexInit.CompressionServerClient)
	stoc.compression, err = findCommon("server to client compression", clientKexInit.CompressionServerClient, serverKexInit.CompressionServerClient)
	if err != nil {
		return
	}
@ -297,7 +469,7 @@ func (c *Config) SetDefaults() {
		c.Rand = rand.Reader
	}
	if c.Ciphers == nil {
-		c.Ciphers = preferredCiphers
		c.Ciphers = defaultCiphers
	}
	var ciphers []string
	for _, c := range c.Ciphers {
@ -309,19 +481,22 @@ func (c *Config) SetDefaults() {
	c.Ciphers = ciphers

	if c.KeyExchanges == nil {
-		c.KeyExchanges = preferredKexAlgos
		c.KeyExchanges = defaultKexAlgos
	}
	var kexs []string
	for _, k := range c.KeyExchanges {
		if kexAlgoMap[k] != nil {
			// Ignore the KEX if we have no kexAlgoMap definition.
			kexs = append(kexs, k)
			if k == KeyExchangeCurve25519 && !contains(c.KeyExchanges, keyExchangeCurve25519LibSSH) {
				kexs = append(kexs, keyExchangeCurve25519LibSSH)
			}
		}
	}
	c.KeyExchanges = kexs

	if c.MACs == nil {
-		c.MACs = supportedMACs
		c.MACs = defaultMACs
	}
	var macs []string
	for _, m := range c.MACs {

View File

@ -74,6 +74,13 @@ type Conn interface {
	// Disconnect
}
// AlgorithmsConnMetadata is a ConnMetadata that can return the algorithms
// negotiated between client and server.
type AlgorithmsConnMetadata interface {
ConnMetadata
Algorithms() NegotiatedAlgorithms
}
// DiscardRequests consumes and rejects all requests from the
// passed-in channel.
func DiscardRequests(in <-chan *Request) {
@ -106,6 +113,7 @@ type sshConn struct {
	sessionID     []byte
	clientVersion []byte
	serverVersion []byte
	algorithms    NegotiatedAlgorithms
}

func dup(src []byte) []byte {
@ -141,3 +149,7 @@ func (c *sshConn) ClientVersion() []byte {
func (c *sshConn) ServerVersion() []byte {
	return dup(c.serverVersion)
}
func (c *sshConn) Algorithms() NegotiatedAlgorithms {
return c.algorithms
}
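A sketch of how a caller could read the negotiated set back out through the AlgorithmsConnMetadata interface added above; conn is assumed to be an established *ssh.Client:

	if md, ok := conn.Conn.(ssh.AlgorithmsConnMetadata); ok {
		negotiated := md.Algorithms()
		log.Printf("kex=%s cipher(in)=%s cipher(out)=%s",
			negotiated.KeyExchange, negotiated.Read.Cipher, negotiated.Write.Cipher)
	}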

View File

@ -38,7 +38,7 @@ type keyingTransport interface {
	// prepareKeyChange sets up a key change. The key change for a
	// direction will be effected if a msgNewKeys message is sent
	// or received.
-	prepareKeyChange(*algorithms, *kexResult) error
	prepareKeyChange(*NegotiatedAlgorithms, *kexResult) error

	// setStrictMode sets the strict KEX mode, notably triggering
	// sequence number resets on sending or receiving msgNewKeys.
@ -115,7 +115,7 @@ type handshakeTransport struct {
	bannerCallback BannerCallback

	// Algorithms agreed in the last key exchange.
-	algorithms *algorithms
	algorithms *NegotiatedAlgorithms

	// Counters exclusively owned by readLoop.
	readPacketsLeft uint32
@ -164,7 +164,7 @@ func newClientTransport(conn keyingTransport, clientVersion, serverVersion []byt
	if config.HostKeyAlgorithms != nil {
		t.hostKeyAlgorithms = config.HostKeyAlgorithms
	} else {
-		t.hostKeyAlgorithms = supportedHostKeyAlgos
		t.hostKeyAlgorithms = defaultHostKeyAlgos
	}
	go t.readLoop()
	go t.kexLoop()
@ -184,6 +184,10 @@ func (t *handshakeTransport) getSessionID() []byte {
	return t.sessionID
}
func (t *handshakeTransport) getAlgorithms() NegotiatedAlgorithms {
return *t.algorithms
}
// waitSession waits for the session to be established. This should be
// the first thing to call after instantiating handshakeTransport.
func (t *handshakeTransport) waitSession() error {
@ -290,7 +294,7 @@ func (t *handshakeTransport) resetWriteThresholds() {
	if t.config.RekeyThreshold > 0 {
		t.writeBytesLeft = int64(t.config.RekeyThreshold)
	} else if t.algorithms != nil {
-		t.writeBytesLeft = t.algorithms.w.rekeyBytes()
		t.writeBytesLeft = t.algorithms.Write.rekeyBytes()
	} else {
		t.writeBytesLeft = 1 << 30
	}
@ -407,7 +411,7 @@ func (t *handshakeTransport) resetReadThresholds() {
	if t.config.RekeyThreshold > 0 {
		t.readBytesLeft = int64(t.config.RekeyThreshold)
	} else if t.algorithms != nil {
-		t.readBytesLeft = t.algorithms.r.rekeyBytes()
		t.readBytesLeft = t.algorithms.Read.rekeyBytes()
	} else {
		t.readBytesLeft = 1 << 30
	}
@ -700,9 +704,9 @@ func (t *handshakeTransport) enterKeyExchange(otherInitPacket []byte) error {
		}
	}

-	kex, ok := kexAlgoMap[t.algorithms.kex]
	kex, ok := kexAlgoMap[t.algorithms.KeyExchange]
	if !ok {
-		return fmt.Errorf("ssh: unexpected key exchange algorithm %v", t.algorithms.kex)
		return fmt.Errorf("ssh: unexpected key exchange algorithm %v", t.algorithms.KeyExchange)
	}

	var result *kexResult
@ -809,12 +813,12 @@ func pickHostKey(hostKeys []Signer, algo string) AlgorithmSigner {
}

func (t *handshakeTransport) server(kex kexAlgorithm, magics *handshakeMagics) (*kexResult, error) {
-	hostKey := pickHostKey(t.hostKeys, t.algorithms.hostKey)
	hostKey := pickHostKey(t.hostKeys, t.algorithms.HostKey)
	if hostKey == nil {
		return nil, errors.New("ssh: internal error: negotiated unsupported signature type")
	}

-	r, err := kex.Server(t.conn, t.config.Rand, magics, hostKey, t.algorithms.hostKey)
	r, err := kex.Server(t.conn, t.config.Rand, magics, hostKey, t.algorithms.HostKey)
	return r, err
}
@ -829,7 +833,7 @@ func (t *handshakeTransport) client(kex kexAlgorithm, magics *handshakeMagics) (
		return nil, err
	}

-	if err := verifyHostKeySignature(hostKey, t.algorithms.hostKey, result); err != nil {
	if err := verifyHostKeySignature(hostKey, t.algorithms.HostKey, result); err != nil {
		return nil, err
	}

vendor/golang.org/x/crypto/ssh/kex.go generated vendored
View File

@ -20,21 +20,18 @@ import (
) )
const ( const (
kexAlgoDH1SHA1 = "diffie-hellman-group1-sha1" // This is the group called diffie-hellman-group1-sha1 in RFC 4253 and
kexAlgoDH14SHA1 = "diffie-hellman-group14-sha1" // Oakley Group 2 in RFC 2409.
kexAlgoDH14SHA256 = "diffie-hellman-group14-sha256" oakleyGroup2 = "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF"
kexAlgoDH16SHA512 = "diffie-hellman-group16-sha512" // This is the group called diffie-hellman-group14-sha1 in RFC 4253 and
kexAlgoECDH256 = "ecdh-sha2-nistp256" // Oakley Group 14 in RFC 3526.
kexAlgoECDH384 = "ecdh-sha2-nistp384" oakleyGroup14 = "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF"
kexAlgoECDH521 = "ecdh-sha2-nistp521" // This is the group called diffie-hellman-group15-sha512 in RFC 8268 and
kexAlgoCurve25519SHA256LibSSH = "curve25519-sha256@libssh.org" // Oakley Group 15 in RFC 3526.
kexAlgoCurve25519SHA256 = "curve25519-sha256" oakleyGroup15 = "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF"
// This is the group called diffie-hellman-group16-sha512 in RFC 8268 and
// For the following kex only the client half contains a production // Oakley Group 16 in RFC 3526.
// ready implementation. The server half only consists of a minimal oakleyGroup16 = "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199FFFFFFFFFFFFFFFF"
// implementation to satisfy the automated tests.
kexAlgoDHGEXSHA1 = "diffie-hellman-group-exchange-sha1"
kexAlgoDHGEXSHA256 = "diffie-hellman-group-exchange-sha256"
) )
// kexResult captures the outcome of a key exchange. // kexResult captures the outcome of a key exchange.
@ -402,53 +399,46 @@ func ecHash(curve elliptic.Curve) crypto.Hash {
var kexAlgoMap = map[string]kexAlgorithm{} var kexAlgoMap = map[string]kexAlgorithm{}
func init() { func init() {
// This is the group called diffie-hellman-group1-sha1 in p, _ := new(big.Int).SetString(oakleyGroup2, 16)
// RFC 4253 and Oakley Group 2 in RFC 2409. kexAlgoMap[InsecureKeyExchangeDH1SHA1] = &dhGroup{
p, _ := new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF", 16)
kexAlgoMap[kexAlgoDH1SHA1] = &dhGroup{
g: new(big.Int).SetInt64(2), g: new(big.Int).SetInt64(2),
p: p, p: p,
pMinus1: new(big.Int).Sub(p, bigOne), pMinus1: new(big.Int).Sub(p, bigOne),
hashFunc: crypto.SHA1, hashFunc: crypto.SHA1,
} }
// This are the groups called diffie-hellman-group14-sha1 and p, _ = new(big.Int).SetString(oakleyGroup14, 16)
// diffie-hellman-group14-sha256 in RFC 4253 and RFC 8268,
// and Oakley Group 14 in RFC 3526.
p, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
group14 := &dhGroup{ group14 := &dhGroup{
g: new(big.Int).SetInt64(2), g: new(big.Int).SetInt64(2),
p: p, p: p,
pMinus1: new(big.Int).Sub(p, bigOne), pMinus1: new(big.Int).Sub(p, bigOne),
} }
kexAlgoMap[kexAlgoDH14SHA1] = &dhGroup{ kexAlgoMap[InsecureKeyExchangeDH14SHA1] = &dhGroup{
g: group14.g, p: group14.p, pMinus1: group14.pMinus1, g: group14.g, p: group14.p, pMinus1: group14.pMinus1,
hashFunc: crypto.SHA1, hashFunc: crypto.SHA1,
} }
kexAlgoMap[kexAlgoDH14SHA256] = &dhGroup{ kexAlgoMap[KeyExchangeDH14SHA256] = &dhGroup{
g: group14.g, p: group14.p, pMinus1: group14.pMinus1, g: group14.g, p: group14.p, pMinus1: group14.pMinus1,
hashFunc: crypto.SHA256, hashFunc: crypto.SHA256,
} }
// This is the group called diffie-hellman-group16-sha512 in RFC p, _ = new(big.Int).SetString(oakleyGroup16, 16)
// 8268 and Oakley Group 16 in RFC 3526.
p, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199FFFFFFFFFFFFFFFF", 16)
kexAlgoMap[kexAlgoDH16SHA512] = &dhGroup{ kexAlgoMap[KeyExchangeDH16SHA512] = &dhGroup{
g: new(big.Int).SetInt64(2), g: new(big.Int).SetInt64(2),
p: p, p: p,
pMinus1: new(big.Int).Sub(p, bigOne), pMinus1: new(big.Int).Sub(p, bigOne),
hashFunc: crypto.SHA512, hashFunc: crypto.SHA512,
} }
kexAlgoMap[kexAlgoECDH521] = &ecdh{elliptic.P521()} kexAlgoMap[KeyExchangeECDHP521] = &ecdh{elliptic.P521()}
kexAlgoMap[kexAlgoECDH384] = &ecdh{elliptic.P384()} kexAlgoMap[KeyExchangeECDHP384] = &ecdh{elliptic.P384()}
kexAlgoMap[kexAlgoECDH256] = &ecdh{elliptic.P256()} kexAlgoMap[KeyExchangeECDHP256] = &ecdh{elliptic.P256()}
kexAlgoMap[kexAlgoCurve25519SHA256] = &curve25519sha256{} kexAlgoMap[KeyExchangeCurve25519] = &curve25519sha256{}
kexAlgoMap[kexAlgoCurve25519SHA256LibSSH] = &curve25519sha256{} kexAlgoMap[keyExchangeCurve25519LibSSH] = &curve25519sha256{}
kexAlgoMap[kexAlgoDHGEXSHA1] = &dhGEXSHA{hashFunc: crypto.SHA1} kexAlgoMap[InsecureKeyExchangeDHGEXSHA1] = &dhGEXSHA{hashFunc: crypto.SHA1}
kexAlgoMap[kexAlgoDHGEXSHA256] = &dhGEXSHA{hashFunc: crypto.SHA256} kexAlgoMap[KeyExchangeDHGEXSHA256] = &dhGEXSHA{hashFunc: crypto.SHA256}
} }
// curve25519sha256 implements the curve25519-sha256 (formerly known as // curve25519sha256 implements the curve25519-sha256 (formerly known as
@ -601,9 +591,9 @@ const (
func (gex *dhGEXSHA) Client(c packetConn, randSource io.Reader, magics *handshakeMagics) (*kexResult, error) {
	// Send GexRequest
	kexDHGexRequest := kexDHGexRequestMsg{
		MinBits:       dhGroupExchangeMinimumBits,
-		PreferedBits:  dhGroupExchangePreferredBits,
		PreferredBits: dhGroupExchangePreferredBits,
		MaxBits:       dhGroupExchangeMaximumBits,
	}
	if err := c.writePacket(Marshal(&kexDHGexRequest)); err != nil {
		return nil, err
@ -690,9 +680,7 @@ func (gex *dhGEXSHA) Client(c packetConn, randSource io.Reader, magics *handshak
}

// Server half implementation of the Diffie Hellman Key Exchange with SHA1 and SHA256.
-//
-// This is a minimal implementation to satisfy the automated tests.
-func (gex dhGEXSHA) Server(c packetConn, randSource io.Reader, magics *handshakeMagics, priv AlgorithmSigner, algo string) (result *kexResult, err error) {
func (gex *dhGEXSHA) Server(c packetConn, randSource io.Reader, magics *handshakeMagics, priv AlgorithmSigner, algo string) (result *kexResult, err error) {
	// Receive GexRequest
	packet, err := c.readPacket()
	if err != nil {
@ -702,13 +690,32 @@ func (gex dhGEXSHA) Server(c packetConn, randSource io.Reader, magics *handshake
	if err = Unmarshal(packet, &kexDHGexRequest); err != nil {
		return
	}
// We check that the request received is valid and that the MaxBits
// requested are at least equal to our supported minimum. This is the same
// check done in OpenSSH:
// https://github.com/openssh/openssh-portable/blob/80a2f64b/kexgexs.c#L94
//
// Furthermore, we also check that the required MinBits are less than or
// equal to 4096 because we can use up to Oakley Group 16.
if kexDHGexRequest.MaxBits < kexDHGexRequest.MinBits || kexDHGexRequest.PreferredBits < kexDHGexRequest.MinBits ||
kexDHGexRequest.MaxBits < kexDHGexRequest.PreferredBits || kexDHGexRequest.MaxBits < dhGroupExchangeMinimumBits ||
kexDHGexRequest.MinBits > 4096 {
return nil, fmt.Errorf("ssh: DH GEX request out of range, min: %d, max: %d, preferred: %d", kexDHGexRequest.MinBits,
kexDHGexRequest.MaxBits, kexDHGexRequest.PreferredBits)
}
var p *big.Int
// We hardcode sending Oakley Group 14 (2048 bits), Oakley Group 15 (3072
// bits) or Oakley Group 16 (4096 bits), based on the requested max size.
if kexDHGexRequest.MaxBits < 3072 {
p, _ = new(big.Int).SetString(oakleyGroup14, 16)
} else if kexDHGexRequest.MaxBits < 4096 {
p, _ = new(big.Int).SetString(oakleyGroup15, 16)
} else {
p, _ = new(big.Int).SetString(oakleyGroup16, 16)
}
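	// For example: a request with MaxBits of 2048 is answered with Oakley
	// Group 14, MaxBits of 3072 with Group 15, and anything of 4096 bits or
	// more with Group 16.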
// Send GexGroup
// This is the group called diffie-hellman-group14-sha1 in RFC
// 4253 and Oakley Group 14 in RFC 3526.
p, _ := new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
	g := big.NewInt(2)

	msg := &kexDHGexGroupMsg{
		P: p,
		G: g,
@ -746,9 +753,9 @@ func (gex dhGEXSHA) Server(c packetConn, randSource io.Reader, magics *handshake
	h := gex.hashFunc.New()
	magics.write(h)
	writeString(h, hostKeyBytes)
-	binary.Write(h, binary.BigEndian, uint32(dhGroupExchangeMinimumBits))
-	binary.Write(h, binary.BigEndian, uint32(dhGroupExchangePreferredBits))
-	binary.Write(h, binary.BigEndian, uint32(dhGroupExchangeMaximumBits))
	binary.Write(h, binary.BigEndian, kexDHGexRequest.MinBits)
	binary.Write(h, binary.BigEndian, kexDHGexRequest.PreferredBits)
	binary.Write(h, binary.BigEndian, kexDHGexRequest.MaxBits)
	writeInt(h, p)
	writeInt(h, g)
	writeInt(h, kexDHGexInit.X)

View File

@ -36,14 +36,19 @@ import (
// ClientConfig.HostKeyAlgorithms, Signature.Format, or as AlgorithmSigner
// arguments.
const (
	KeyAlgoRSA = "ssh-rsa"
-	KeyAlgoDSA        = "ssh-dss"
-	KeyAlgoECDSA256   = "ecdsa-sha2-nistp256"
-	KeyAlgoSKECDSA256 = "sk-ecdsa-sha2-nistp256@openssh.com"
-	KeyAlgoECDSA384   = "ecdsa-sha2-nistp384"
-	KeyAlgoECDSA521   = "ecdsa-sha2-nistp521"
-	KeyAlgoED25519    = "ssh-ed25519"
-	KeyAlgoSKED25519  = "sk-ssh-ed25519@openssh.com"
	// Deprecated: DSA is only supported at insecure key sizes, and was removed
	// from major implementations.
	KeyAlgoDSA = InsecureKeyAlgoDSA
	// Deprecated: DSA is only supported at insecure key sizes, and was removed
	// from major implementations.
	InsecureKeyAlgoDSA = "ssh-dss"
	KeyAlgoECDSA256    = "ecdsa-sha2-nistp256"
KeyAlgoSKECDSA256 = "sk-ecdsa-sha2-nistp256@openssh.com"
KeyAlgoECDSA384 = "ecdsa-sha2-nistp384"
KeyAlgoECDSA521 = "ecdsa-sha2-nistp521"
KeyAlgoED25519 = "ssh-ed25519"
KeyAlgoSKED25519 = "sk-ssh-ed25519@openssh.com"
	// KeyAlgoRSASHA256 and KeyAlgoRSASHA512 are only public key algorithms, not
	// public key formats, so they can't appear as a PublicKey.Type. The
@ -67,7 +72,7 @@ func parsePubKey(in []byte, algo string) (pubKey PublicKey, rest []byte, err err
	switch algo {
	case KeyAlgoRSA:
		return parseRSA(in)
-	case KeyAlgoDSA:
	case InsecureKeyAlgoDSA:
		return parseDSA(in)
	case KeyAlgoECDSA256, KeyAlgoECDSA384, KeyAlgoECDSA521:
		return parseECDSA(in)
@ -77,7 +82,7 @@ func parsePubKey(in []byte, algo string) (pubKey PublicKey, rest []byte, err err
		return parseED25519(in)
	case KeyAlgoSKED25519:
		return parseSKEd25519(in)
-	case CertAlgoRSAv01, CertAlgoDSAv01, CertAlgoECDSA256v01, CertAlgoECDSA384v01, CertAlgoECDSA521v01, CertAlgoSKECDSA256v01, CertAlgoED25519v01, CertAlgoSKED25519v01:
	case CertAlgoRSAv01, InsecureCertAlgoDSAv01, CertAlgoECDSA256v01, CertAlgoECDSA384v01, CertAlgoECDSA521v01, CertAlgoSKECDSA256v01, CertAlgoED25519v01, CertAlgoSKED25519v01:
		cert, err := parseCert(in, certKeyAlgoNames[algo])
		if err != nil {
			return nil, nil, err

View File

@ -47,22 +47,22 @@ func (t truncatingMAC) Size() int {
func (t truncatingMAC) BlockSize() int { return t.hmac.BlockSize() }

var macModes = map[string]*macMode{
-	"hmac-sha2-512-etm@openssh.com": {64, true, func(key []byte) hash.Hash {
	HMACSHA512ETM: {64, true, func(key []byte) hash.Hash {
		return hmac.New(sha512.New, key)
	}},
-	"hmac-sha2-256-etm@openssh.com": {32, true, func(key []byte) hash.Hash {
	HMACSHA256ETM: {32, true, func(key []byte) hash.Hash {
		return hmac.New(sha256.New, key)
	}},
-	"hmac-sha2-512": {64, false, func(key []byte) hash.Hash {
	HMACSHA512: {64, false, func(key []byte) hash.Hash {
		return hmac.New(sha512.New, key)
	}},
-	"hmac-sha2-256": {32, false, func(key []byte) hash.Hash {
	HMACSHA256: {32, false, func(key []byte) hash.Hash {
		return hmac.New(sha256.New, key)
	}},
-	"hmac-sha1": {20, false, func(key []byte) hash.Hash {
	HMACSHA1: {20, false, func(key []byte) hash.Hash {
		return hmac.New(sha1.New, key)
	}},
-	"hmac-sha1-96": {20, false, func(key []byte) hash.Hash {
	InsecureHMACSHA196: {20, false, func(key []byte) hash.Hash {
		return truncatingMAC{12, hmac.New(sha1.New, key)}
	}},
}
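A short, hypothetical sketch of narrowing Config.MACs with the newly exported names (the field itself is part of the existing ssh.Config API):

	var cfg ssh.ClientConfig
	// Accept only the encrypt-then-MAC variants.
	cfg.MACs = []string{ssh.HMACSHA256ETM, ssh.HMACSHA512ETM}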

View File

@ -122,9 +122,9 @@ type kexDHGexReplyMsg struct {
const msgKexDHGexRequest = 34

type kexDHGexRequestMsg struct {
	MinBits       uint32 `sshtype:"34"`
-	PreferedBits  uint32
	PreferredBits uint32
	MaxBits       uint32
}

// See RFC 4253, section 10.

View File

@ -19,19 +19,15 @@ import (
"golang.org/x/crypto/curve25519" "golang.org/x/crypto/curve25519"
) )
const (
kexAlgoMLKEM768xCurve25519SHA256 = "mlkem768x25519-sha256"
)
func init() { func init() {
// After Go 1.24rc1 mlkem swapped the order of return values of Encapsulate. // After Go 1.24rc1 mlkem swapped the order of return values of Encapsulate.
// See #70950. // See #70950.
if runtime.Version() == "go1.24rc1" { if runtime.Version() == "go1.24rc1" {
return return
} }
supportedKexAlgos = slices.Insert(supportedKexAlgos, 0, kexAlgoMLKEM768xCurve25519SHA256) supportedKexAlgos = slices.Insert(supportedKexAlgos, 0, KeyExchangeMLKEM768X25519)
preferredKexAlgos = slices.Insert(preferredKexAlgos, 0, kexAlgoMLKEM768xCurve25519SHA256) defaultKexAlgos = slices.Insert(defaultKexAlgos, 0, KeyExchangeMLKEM768X25519)
kexAlgoMap[kexAlgoMLKEM768xCurve25519SHA256] = &mlkem768WithCurve25519sha256{} kexAlgoMap[KeyExchangeMLKEM768X25519] = &mlkem768WithCurve25519sha256{}
} }
// mlkem768WithCurve25519sha256 implements the hybrid ML-KEM768 with // mlkem768WithCurve25519sha256 implements the hybrid ML-KEM768 with

View File

@ -243,22 +243,15 @@ func NewServerConn(c net.Conn, config *ServerConfig) (*ServerConn, <-chan NewCha
		fullConf.MaxAuthTries = 6
	}
	if len(fullConf.PublicKeyAuthAlgorithms) == 0 {
-		fullConf.PublicKeyAuthAlgorithms = supportedPubKeyAuthAlgos
		fullConf.PublicKeyAuthAlgorithms = defaultPubKeyAuthAlgos
	} else {
		for _, algo := range fullConf.PublicKeyAuthAlgorithms {
-			if !contains(supportedPubKeyAuthAlgos, algo) {
			if !contains(SupportedAlgorithms().PublicKeyAuths, algo) && !contains(InsecureAlgorithms().PublicKeyAuths, algo) {
				c.Close()
				return nil, nil, nil, fmt.Errorf("ssh: unsupported public key authentication algorithm %s", algo)
			}
		}
	}
-	// Check if the config contains any unsupported key exchanges
-	for _, kex := range fullConf.KeyExchanges {
-		if _, ok := serverForbiddenKexAlgos[kex]; ok {
-			c.Close()
-			return nil, nil, nil, fmt.Errorf("ssh: unsupported key exchange %s for server", kex)
-		}
-	}

	s := &connection{
		sshConn: sshConn{conn: c},
@ -315,6 +308,7 @@ func (s *connection) serverHandshake(config *ServerConfig) (*Permissions, error)
	// We just did the key change, so the session ID is established.
	s.sessionID = s.transport.getSessionID()
	s.algorithms = s.transport.getAlgorithms()

	var packet []byte
	if packet, err = s.transport.readPacket(); err != nil {

View File

@ -16,13 +16,6 @@ import (
// wire. No message decoding is done, to minimize the impact on timing.
const debugTransport = false

-const (
-	gcm128CipherID = "aes128-gcm@openssh.com"
-	gcm256CipherID = "aes256-gcm@openssh.com"
-	aes128cbcID    = "aes128-cbc"
-	tripledescbcID = "3des-cbc"
-)

// packetConn represents a transport that implements packet based
// operations.
type packetConn interface {
@ -92,14 +85,14 @@ func (t *transport) setInitialKEXDone() {
// prepareKeyChange sets up key material for a keychange. The key changes in
// both directions are triggered by reading and writing a msgNewKey packet
// respectively.
-func (t *transport) prepareKeyChange(algs *algorithms, kexResult *kexResult) error {
-	ciph, err := newPacketCipher(t.reader.dir, algs.r, kexResult)
func (t *transport) prepareKeyChange(algs *NegotiatedAlgorithms, kexResult *kexResult) error {
	ciph, err := newPacketCipher(t.reader.dir, algs.Read, kexResult)
	if err != nil {
		return err
	}
	t.reader.pendingKeyChange <- ciph

-	ciph, err = newPacketCipher(t.writer.dir, algs.w, kexResult)
	ciph, err = newPacketCipher(t.writer.dir, algs.Write, kexResult)
	if err != nil {
		return err
	}
@ -259,7 +252,7 @@ var (
// setupKeys sets the cipher and MAC keys from kex.K, kex.H and sessionId, as // setupKeys sets the cipher and MAC keys from kex.K, kex.H and sessionId, as
// described in RFC 4253, section 6.4. direction should either be serverKeys // described in RFC 4253, section 6.4. direction should either be serverKeys
// (to setup server->client keys) or clientKeys (for client->server keys). // (to setup server->client keys) or clientKeys (for client->server keys).
func newPacketCipher(d direction, algs directionAlgorithms, kex *kexResult) (packetCipher, error) { func newPacketCipher(d direction, algs DirectionAlgorithms, kex *kexResult) (packetCipher, error) {
cipherMode := cipherModes[algs.Cipher] cipherMode := cipherModes[algs.Cipher]
iv := make([]byte, cipherMode.ivSize) iv := make([]byte, cipherMode.ivSize)

View File

@ -16,7 +16,7 @@ import (
"io" "io"
"os" "os"
"path/filepath" "path/filepath"
"sort" "slices"
"strings" "strings"
) )
@ -36,7 +36,7 @@ type Hash func(files []string, open func(string) (io.ReadCloser, error)) (string
// sha256sum $(find . -type f | sort) | sha256sum // sha256sum $(find . -type f | sort) | sha256sum
// //
// More precisely, the hashed summary contains a single line for each file in the list, // More precisely, the hashed summary contains a single line for each file in the list,
// ordered by sort.Strings applied to the file names, where each line consists of // ordered by [slices.Sort] applied to the file names, where each line consists of
// the hexadecimal SHA-256 hash of the file content, // the hexadecimal SHA-256 hash of the file content,
// two spaces (U+0020), the file name, and a newline (U+000A). // two spaces (U+0020), the file name, and a newline (U+000A).
// //
@ -44,7 +44,7 @@ type Hash func(files []string, open func(string) (io.ReadCloser, error)) (string
func Hash1(files []string, open func(string) (io.ReadCloser, error)) (string, error) { func Hash1(files []string, open func(string) (io.ReadCloser, error)) (string, error) {
h := sha256.New() h := sha256.New()
files = append([]string(nil), files...) files = append([]string(nil), files...)
sort.Strings(files) slices.Sort(files)
for _, file := range files { for _, file := range files {
if strings.Contains(file, "\n") { if strings.Contains(file, "\n") {
return "", errors.New("dirhash: filenames with newlines are not supported") return "", errors.New("dirhash: filenames with newlines are not supported")
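The hunk above only swaps sort.Strings for slices.Sort, but the surrounding comment documents the summary format Hash1 hashes: one line per file, hex SHA-256 of the content, two spaces, the file name, a newline. The sketch below reproduces that summary with the standard library for illustration; the real dirhash.Hash1 additionally encodes and prefixes the final digest, so this is not a drop-in replacement.

```go
// Illustrative sketch of the documented per-file summary, then a second hash
// over the sorted summary lines.
package dirhashsketch

import (
	"crypto/sha256"
	"fmt"
	"os"
	"slices"
)

func summaryDigest(files []string) ([32]byte, error) {
	files = slices.Clone(files)
	slices.Sort(files) // same ordering the doc comment above now references
	h := sha256.New()
	for _, name := range files {
		data, err := os.ReadFile(name)
		if err != nil {
			return [32]byte{}, err
		}
		// "<hex sha256 of content>  <name>\n"
		fmt.Fprintf(h, "%x  %s\n", sha256.Sum256(data), name)
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out, nil
}
```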

View File

@ -39,7 +39,7 @@ const (
FrameContinuation FrameType = 0x9 FrameContinuation FrameType = 0x9
) )
var frameName = map[FrameType]string{ var frameNames = [...]string{
FrameData: "DATA", FrameData: "DATA",
FrameHeaders: "HEADERS", FrameHeaders: "HEADERS",
FramePriority: "PRIORITY", FramePriority: "PRIORITY",
@ -53,10 +53,10 @@ var frameName = map[FrameType]string{
} }
func (t FrameType) String() string { func (t FrameType) String() string {
if s, ok := frameName[t]; ok { if int(t) < len(frameNames) {
return s return frameNames[t]
} }
return fmt.Sprintf("UNKNOWN_FRAME_TYPE_%d", uint8(t)) return fmt.Sprintf("UNKNOWN_FRAME_TYPE_%d", t)
} }
// Flags is a bitmask of HTTP/2 flags. // Flags is a bitmask of HTTP/2 flags.
@ -124,7 +124,7 @@ var flagName = map[FrameType]map[Flags]string{
// might be 0). // might be 0).
type frameParser func(fc *frameCache, fh FrameHeader, countError func(string), payload []byte) (Frame, error) type frameParser func(fc *frameCache, fh FrameHeader, countError func(string), payload []byte) (Frame, error)
var frameParsers = map[FrameType]frameParser{ var frameParsers = [...]frameParser{
FrameData: parseDataFrame, FrameData: parseDataFrame,
FrameHeaders: parseHeadersFrame, FrameHeaders: parseHeadersFrame,
FramePriority: parsePriorityFrame, FramePriority: parsePriorityFrame,
@ -138,8 +138,8 @@ var frameParsers = map[FrameType]frameParser{
} }
func typeFrameParser(t FrameType) frameParser { func typeFrameParser(t FrameType) frameParser {
if f := frameParsers[t]; f != nil { if int(t) < len(frameParsers) {
return f return frameParsers[t]
} }
return parseUnknownFrame return parseUnknownFrame
} }
@ -509,7 +509,7 @@ func (fr *Framer) ReadFrame() (Frame, error) {
} }
if fh.Length > fr.maxReadSize { if fh.Length > fr.maxReadSize {
if fh == invalidHTTP1LookingFrameHeader() { if fh == invalidHTTP1LookingFrameHeader() {
return nil, fmt.Errorf("http2: failed reading the frame payload: %w, note that the frame header looked like an HTTP/1.1 header", err) return nil, fmt.Errorf("http2: failed reading the frame payload: %w, note that the frame header looked like an HTTP/1.1 header", ErrFrameTooLarge)
} }
return nil, ErrFrameTooLarge return nil, ErrFrameTooLarge
} }
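The frame-name and frame-parser changes above replace map lookups keyed by FrameType with fixed arrays indexed by the frame type plus a bounds check. A standalone sketch of the same lookup pattern is below; the type and constants are placeholders, not the http2 package's own definitions.

```go
// Sketch of array-indexed lookup with a bounds-checked fallback.
package main

import "fmt"

type FrameType uint8

const (
	FrameData    FrameType = 0x0
	FrameHeaders FrameType = 0x1
)

var frameNames = [...]string{
	FrameData:    "DATA",
	FrameHeaders: "HEADERS",
}

func (t FrameType) String() string {
	// Array indexing replaces the former map lookup; unknown types still
	// get a readable fallback.
	if int(t) < len(frameNames) {
		return frameNames[t]
	}
	return fmt.Sprintf("UNKNOWN_FRAME_TYPE_%d", uint8(t))
}

func main() {
	fmt.Println(FrameHeaders, FrameType(0x42)) // HEADERS UNKNOWN_FRAME_TYPE_66
}
```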

View File

@ -508,7 +508,7 @@ const eventsHTML = `
<tr class="first"> <tr class="first">
<td class="when">{{$el.When}}</td> <td class="when">{{$el.When}}</td>
<td class="elapsed">{{$el.ElapsedTime}}</td> <td class="elapsed">{{$el.ElapsedTime}}</td>
<td>{{$el.Title}} <td>{{$el.Title}}</td>
</tr> </tr>
{{if $.Expanded}} {{if $.Expanded}}
<tr> <tr>

View File

@ -288,7 +288,7 @@ func (tf *tokenRefresher) Token() (*Token, error) {
if tf.refreshToken != tk.RefreshToken { if tf.refreshToken != tk.RefreshToken {
tf.refreshToken = tk.RefreshToken tf.refreshToken = tk.RefreshToken
} }
return tk, err return tk, nil
} }
// reuseTokenSource is a TokenSource that holds a single token in memory // reuseTokenSource is a TokenSource that holds a single token in memory
@ -356,11 +356,15 @@ func NewClient(ctx context.Context, src TokenSource) *http.Client {
if src == nil { if src == nil {
return internal.ContextClient(ctx) return internal.ContextClient(ctx)
} }
cc := internal.ContextClient(ctx)
return &http.Client{ return &http.Client{
Transport: &Transport{ Transport: &Transport{
Base: internal.ContextClient(ctx).Transport, Base: cc.Transport,
Source: ReuseTokenSource(nil, src), Source: ReuseTokenSource(nil, src),
}, },
CheckRedirect: cc.CheckRedirect,
Jar: cc.Jar,
Timeout: cc.Timeout,
} }
} }
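With the change above, oauth2.NewClient carries over CheckRedirect, Jar, and Timeout from the client configured on the context instead of only reusing its Transport. A hedged sketch of supplying such a base client via the oauth2.HTTPClient context key (values are placeholders):

```go
// Sketch: provide a base *http.Client through the context so NewClient
// inherits its Transport and, after this change, its Timeout, Jar and
// CheckRedirect as well.
package main

import (
	"context"
	"net/http"
	"time"

	"golang.org/x/oauth2"
)

func main() {
	base := &http.Client{Timeout: 10 * time.Second}
	ctx := context.WithValue(context.Background(), oauth2.HTTPClient, base)

	src := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "placeholder-token"})
	client := oauth2.NewClient(ctx, src)
	// client.Timeout is now 10s, inherited from base rather than dropped.
	_ = client
}
```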

vendor/golang.org/x/oauth2/pkce.go generated vendored

View File

@ -21,7 +21,7 @@ const (
// //
// A fresh verifier should be generated for each authorization. // A fresh verifier should be generated for each authorization.
// S256ChallengeOption(verifier) should then be passed to Config.AuthCodeURL // S256ChallengeOption(verifier) should then be passed to Config.AuthCodeURL
// (or Config.DeviceAccess) and VerifierOption(verifier) to Config.Exchange // (or Config.DeviceAuth) and VerifierOption(verifier) to Config.Exchange
// (or Config.DeviceAccessToken). // (or Config.DeviceAccessToken).
func GenerateVerifier() string { func GenerateVerifier() string {
// "RECOMMENDED that the output of a suitable random number generator be // "RECOMMENDED that the output of a suitable random number generator be
@ -51,7 +51,7 @@ func S256ChallengeFromVerifier(verifier string) string {
} }
// S256ChallengeOption derives a PKCE code challenge derived from verifier with // S256ChallengeOption derives a PKCE code challenge derived from verifier with
// method S256. It should be passed to Config.AuthCodeURL or Config.DeviceAccess // method S256. It should be passed to Config.AuthCodeURL or Config.DeviceAuth
// only. // only.
func S256ChallengeOption(verifier string) AuthCodeOption { func S256ChallengeOption(verifier string) AuthCodeOption {
return challengeOption{ return challengeOption{
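The doc fix above points the S256 challenge at Config.DeviceAuth rather than the nonexistent Config.DeviceAccess. For context, a sketch of the PKCE flow these helpers describe; the Config values and the authorization code are placeholders.

```go
// Sketch of the verifier/challenge round trip documented above.
package main

import (
	"context"
	"log"

	"golang.org/x/oauth2"
)

func main() {
	conf := &oauth2.Config{
		ClientID:    "example-client-id",
		RedirectURL: "https://example.com/callback",
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://auth.example.com/authorize",
			TokenURL: "https://auth.example.com/token",
		},
	}

	// A fresh verifier per authorization; send its S256 challenge with the
	// authorization request.
	verifier := oauth2.GenerateVerifier()
	authURL := conf.AuthCodeURL("state", oauth2.S256ChallengeOption(verifier))
	log.Println("visit:", authURL)

	// Present the original verifier when exchanging the code.
	code := "code-from-callback" // placeholder
	tok, err := conf.Exchange(context.Background(), code, oauth2.VerifierOption(verifier))
	if err != nil {
		log.Fatal(err)
	}
	_ = tok
}
```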

View File

@ -10,6 +10,7 @@
// builds a list of push/pop events and their node type. Subsequent // builds a list of push/pop events and their node type. Subsequent
// method calls that request a traversal scan this list, rather than walk // method calls that request a traversal scan this list, rather than walk
// the AST, and perform type filtering using efficient bit sets. // the AST, and perform type filtering using efficient bit sets.
// This representation is sometimes called a "balanced parenthesis tree."
// //
// Experiments suggest the inspector's traversals are about 2.5x faster // Experiments suggest the inspector's traversals are about 2.5x faster
// than ast.Inspect, but it may take around 5 traversals for this // than ast.Inspect, but it may take around 5 traversals for this
@ -47,9 +48,10 @@ type Inspector struct {
events []event events []event
} }
//go:linkname events //go:linkname events golang.org/x/tools/go/ast/inspector.events
func events(in *Inspector) []event { return in.events } func events(in *Inspector) []event { return in.events }
//go:linkname packEdgeKindAndIndex golang.org/x/tools/go/ast/inspector.packEdgeKindAndIndex
func packEdgeKindAndIndex(ek edge.Kind, index int) int32 { func packEdgeKindAndIndex(ek edge.Kind, index int) int32 {
return int32(uint32(index+1)<<7 | uint32(ek)) return int32(uint32(index+1)<<7 | uint32(ek))
} }
@ -57,7 +59,7 @@ func packEdgeKindAndIndex(ek edge.Kind, index int) int32 {
// unpackEdgeKindAndIndex unpacks the edge kind and edge index (within // unpackEdgeKindAndIndex unpacks the edge kind and edge index (within
// an []ast.Node slice) from the parent field of a pop event. // an []ast.Node slice) from the parent field of a pop event.
// //
//go:linkname unpackEdgeKindAndIndex //go:linkname unpackEdgeKindAndIndex golang.org/x/tools/go/ast/inspector.unpackEdgeKindAndIndex
func unpackEdgeKindAndIndex(x int32) (edge.Kind, int) { func unpackEdgeKindAndIndex(x int32) (edge.Kind, int) {
// The "parent" field of a pop node holds the // The "parent" field of a pop node holds the
// edge Kind in the lower 7 bits and the index+1 // edge Kind in the lower 7 bits and the index+1
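The comments above describe the inspector's pre-built push/pop event list ("balanced parenthesis tree") and note it only pays off after several traversals. A short usage sketch, with made-up file contents:

```go
// Usage sketch: build the event list once, then run type-filtered traversals.
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"log"

	"golang.org/x/tools/go/ast/inspector"
)

func main() {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "example.go", `package p
func f() { println("hi") }`, 0)
	if err != nil {
		log.Fatal(err)
	}

	// New walks the files once and records push/pop events.
	in := inspector.New([]*ast.File{f})

	// Later traversals scan the event list using a node-type bit mask.
	in.Preorder([]ast.Node{(*ast.CallExpr)(nil)}, func(n ast.Node) {
		call := n.(*ast.CallExpr)
		fmt.Println("call at", fset.Position(call.Pos()))
	})
}
```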

View File

@ -217,7 +217,7 @@ func typeOf(n ast.Node) uint64 {
return 0 return 0
} }
//go:linkname maskOf //go:linkname maskOf golang.org/x/tools/go/ast/inspector.maskOf
func maskOf(nodes []ast.Node) uint64 { func maskOf(nodes []ast.Node) uint64 {
if len(nodes) == 0 { if len(nodes) == 0 {
return math.MaxUint64 // match all node types return math.MaxUint64 // match all node types

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,4 +1,4 @@
// Copyright 2024 Google LLC // Copyright 2025 Google LLC
// //
// Licensed under the Apache License, Version 2.0 (the "License"); // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. // you may not use this file except in compliance with the License.

View File

@ -1,73 +1,102 @@
# How to contribute # How to contribute
We definitely welcome your patches and contributions to gRPC! Please read the gRPC We welcome your patches and contributions to gRPC! Please read the gRPC
organization's [governance rules](https://github.com/grpc/grpc-community/blob/master/governance.md) organization's [governance
and [contribution guidelines](https://github.com/grpc/grpc-community/blob/master/CONTRIBUTING.md) before proceeding. rules](https://github.com/grpc/grpc-community/blob/master/governance.md) before
proceeding.
If you are new to GitHub, please start by reading [Pull Request howto](https://help.github.com/articles/about-pull-requests/) If you are new to GitHub, please start by reading [Pull Request howto](https://help.github.com/articles/about-pull-requests/)
## Legal requirements ## Legal requirements
In order to protect both you and ourselves, you will need to sign the In order to protect both you and ourselves, you will need to sign the
[Contributor License Agreement](https://identity.linuxfoundation.org/projects/cncf). [Contributor License
Agreement](https://identity.linuxfoundation.org/projects/cncf). When you create
your first PR, a link will be added as a comment that contains the steps needed
to complete this process.
## Getting Started
A great way to start is by searching through our open issues. [Unassigned issues
labeled as "help
wanted"](https://github.com/grpc/grpc-go/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20label%3A%22Status%3A%20Help%20Wanted%22%20no%3Aassignee)
are especially nice for first-time contributors, as they should be well-defined
problems that already have agreed-upon solutions.
## Code Style
We follow [Google's published Go style
guide](https://google.github.io/styleguide/go/). Note that there are three
primary documents that make up this style guide; please follow them as closely
as possible. If a reviewer recommends something that contradicts those
guidelines, there may be valid reasons to do so, but it should be rare.
## Guidelines for Pull Requests ## Guidelines for Pull Requests
How to get your contributions merged smoothly and quickly.
How to get your contributions merged smoothly and quickly:
- Create **small PRs** that are narrowly focused on **addressing a single - Create **small PRs** that are narrowly focused on **addressing a single
concern**. We often times receive PRs that are trying to fix several things at concern**. We often receive PRs that attempt to fix several things at the same
a time, but only one fix is considered acceptable, nothing gets merged and time, and if one part of the PR has a problem, that will hold up the entire
both author's & review's time is wasted. Create more PRs to address different PR.
concerns and everyone will be happy.
- If you are searching for features to work on, issues labeled [Status: Help - For **speculative changes**, consider opening an issue and discussing it
Wanted](https://github.com/grpc/grpc-go/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22Status%3A+Help+Wanted%22) first. If you are suggesting a behavioral or API change, consider starting
is a great place to start. These issues are well-documented and usually can be with a [gRFC proposal](https://github.com/grpc/proposal). Many new features
resolved with a single pull request. that are not bug fixes will require cross-language agreement.
- If you are adding a new file, make sure it has the copyright message template - If you want to fix **formatting or style**, consider whether your changes are
at the top as a comment. You can copy over the message from an existing file an obvious improvement or might be considered a personal preference. If a
and update the year. style change is based on preference, it likely will not be accepted. If it
corrects widely agreed-upon anti-patterns, then please do create a PR and
explain the benefits of the change.
- The grpc package should only depend on standard Go packages and a small number - For correcting **misspellings**, please be aware that we use some terms that
of exceptions. If your contribution introduces new dependencies which are NOT are sometimes flagged by spell checkers. As an example, "if and only if" is
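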
in the [list](https://godoc.org/google.golang.org/grpc?imports), you need a often written as "iff". Please do not make spelling correction changes unless
discussion with gRPC-Go authors and consultants. you are certain they are misspellings.
- For speculative changes, consider opening an issue and discussing it first. If
you are suggesting a behavioral or API change, consider starting with a [gRFC
proposal](https://github.com/grpc/proposal).
- Provide a good **PR description** as a record of **what** change is being made - Provide a good **PR description** as a record of **what** change is being made
and **why** it was made. Link to a GitHub issue if it exists. and **why** it was made. Link to a GitHub issue if it exists.
- If you want to fix formatting or style, consider whether your changes are an - Maintain a **clean commit history** and use **meaningful commit messages**.
obvious improvement or might be considered a personal preference. If a style PRs with messy commit histories are difficult to review and won't be merged.
change is based on preference, it likely will not be accepted. If it corrects Before sending your PR, ensure your changes are based on top of the latest
widely agreed-upon anti-patterns, then please do create a PR and explain the `upstream/master` commits, and avoid rebasing in the middle of a code review.
benefits of the change. You should **never use `git push -f`** unless absolutely necessary during a
review, as it can interfere with GitHub's tracking of comments.
- Unless your PR is trivial, you should expect there will be reviewer comments
that you'll need to address before merging. We'll mark it as `Status: Requires
Reporter Clarification` if we expect you to respond to these comments in a
timely manner. If the PR remains inactive for 6 days, it will be marked as
`stale` and automatically close 7 days after that if we don't hear back from
you.
- Maintain **clean commit history** and use **meaningful commit messages**. PRs
with messy commit history are difficult to review and won't be merged. Use
`rebase -i upstream/master` to curate your commit history and/or to bring in
latest changes from master (but avoid rebasing in the middle of a code
review).
- Keep your PR up to date with upstream/master (if there are merge conflicts, we
can't really merge your change).
- **All tests need to be passing** before your change can be merged. We - **All tests need to be passing** before your change can be merged. We
recommend you **run tests locally** before creating your PR to catch breakages recommend you run tests locally before creating your PR to catch breakages
early on. early on:
- `./scripts/vet.sh` to catch vet errors
- `go test -cpu 1,4 -timeout 7m ./...` to run the tests
- `go test -race -cpu 1,4 -timeout 7m ./...` to run tests in race mode
- Exceptions to the rules can be made if there's a compelling reason for doing so. - `./scripts/vet.sh` to catch vet errors.
- `go test -cpu 1,4 -timeout 7m ./...` to run the tests.
- `go test -race -cpu 1,4 -timeout 7m ./...` to run tests in race mode.
Note that we have a multi-module repo, so `go test` commands may need to be
run from the root of each module in order to cause all tests to run.
*Alternatively*, you may find it easier to push your changes to your fork on
GitHub, which will trigger a GitHub Actions run that you can use to verify
everything is passing.
- If you are adding a new file, make sure it has the **copyright message**
template at the top as a comment. You can copy the message from an existing
file and update the year.
- The grpc package should only depend on standard Go packages and a small number
of exceptions. **If your contribution introduces new dependencies**, you will
need a discussion with gRPC-Go maintainers. A GitHub action check will run on
every PR, and will flag any transitive dependency changes from any public
package.
- Unless your PR is trivial, you should **expect reviewer comments** that you
will need to address before merging. We'll label the PR as `Status: Requires
Reporter Clarification` if we expect you to respond to these comments in a
timely manner. If the PR remains inactive for 6 days, it will be marked as
`stale`, and we will automatically close it after 7 days if we don't hear back
from you. Please feel free to ping issues or bugs if you do not get a response
within a week.
- Exceptions to the rules can be made if there's a compelling reason to do so.

View File

@ -32,6 +32,7 @@ import "google.golang.org/grpc"
- [Low-level technical docs](Documentation) from this repository - [Low-level technical docs](Documentation) from this repository
- [Performance benchmark][] - [Performance benchmark][]
- [Examples](examples) - [Examples](examples)
- [Contribution guidelines](CONTRIBUTING.md)
## FAQ ## FAQ

View File

@ -18,7 +18,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT. // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: // versions:
// protoc-gen-go v1.36.5 // protoc-gen-go v1.36.6
// protoc v5.27.1 // protoc v5.27.1
// source: grpc/binlog/v1/binarylog.proto // source: grpc/binlog/v1/binarylog.proto
@ -858,133 +858,68 @@ func (x *Address) GetIpPort() uint32 {
var File_grpc_binlog_v1_binarylog_proto protoreflect.FileDescriptor var File_grpc_binlog_v1_binarylog_proto protoreflect.FileDescriptor
var file_grpc_binlog_v1_binarylog_proto_rawDesc = string([]byte{ const file_grpc_binlog_v1_binarylog_proto_rawDesc = "" +
0x0a, 0x1e, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x62, 0x69, 0x6e, 0x6c, 0x6f, 0x67, 0x2f, 0x76, 0x31, "\n" +
0x2f, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, "\x1egrpc/binlog/v1/binarylog.proto\x12\x11grpc.binarylog.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xbb\a\n" +
0x12, 0x11, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, "\fGrpcLogEntry\x128\n" +
0x2e, 0x76, 0x31, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, "\ttimestamp\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\ttimestamp\x12\x17\n" +
0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, "\acall_id\x18\x02 \x01(\x04R\x06callId\x125\n" +
0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, "\x17sequence_id_within_call\x18\x03 \x01(\x04R\x14sequenceIdWithinCall\x12=\n" +
0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, "\x04type\x18\x04 \x01(\x0e2).grpc.binarylog.v1.GrpcLogEntry.EventTypeR\x04type\x12>\n" +
0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbb, 0x07, 0x0a, 0x0c, 0x47, 0x72, 0x70, 0x63, 0x4c, 0x6f, 0x67, "\x06logger\x18\x05 \x01(\x0e2&.grpc.binarylog.v1.GrpcLogEntry.LoggerR\x06logger\x12F\n" +
0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x38, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, "\rclient_header\x18\x06 \x01(\v2\x1f.grpc.binarylog.v1.ClientHeaderH\x00R\fclientHeader\x12F\n" +
0x6d, 0x70, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, "\rserver_header\x18\a \x01(\v2\x1f.grpc.binarylog.v1.ServerHeaderH\x00R\fserverHeader\x126\n" +
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, "\amessage\x18\b \x01(\v2\x1a.grpc.binarylog.v1.MessageH\x00R\amessage\x126\n" +
0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, "\atrailer\x18\t \x01(\v2\x1a.grpc.binarylog.v1.TrailerH\x00R\atrailer\x12+\n" +
0x17, 0x0a, 0x07, 0x63, 0x61, 0x6c, 0x6c, 0x5f, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, "\x11payload_truncated\x18\n" +
0x52, 0x06, 0x63, 0x61, 0x6c, 0x6c, 0x49, 0x64, 0x12, 0x35, 0x0a, 0x17, 0x73, 0x65, 0x71, 0x75, " \x01(\bR\x10payloadTruncated\x12.\n" +
0x65, 0x6e, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x5f, 0x77, 0x69, 0x74, 0x68, 0x69, 0x6e, 0x5f, 0x63, "\x04peer\x18\v \x01(\v2\x1a.grpc.binarylog.v1.AddressR\x04peer\"\xf5\x01\n" +
0x61, 0x6c, 0x6c, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x14, 0x73, 0x65, 0x71, 0x75, 0x65, "\tEventType\x12\x16\n" +
0x6e, 0x63, 0x65, 0x49, 0x64, 0x57, 0x69, 0x74, 0x68, 0x69, 0x6e, 0x43, 0x61, 0x6c, 0x6c, 0x12, "\x12EVENT_TYPE_UNKNOWN\x10\x00\x12\x1c\n" +
0x3d, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x29, 0x2e, "\x18EVENT_TYPE_CLIENT_HEADER\x10\x01\x12\x1c\n" +
0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, "\x18EVENT_TYPE_SERVER_HEADER\x10\x02\x12\x1d\n" +
0x31, 0x2e, 0x47, 0x72, 0x70, 0x63, 0x4c, 0x6f, 0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x2e, 0x45, "\x19EVENT_TYPE_CLIENT_MESSAGE\x10\x03\x12\x1d\n" +
0x76, 0x65, 0x6e, 0x74, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x3e, "\x19EVENT_TYPE_SERVER_MESSAGE\x10\x04\x12 \n" +
0x0a, 0x06, 0x6c, 0x6f, 0x67, 0x67, 0x65, 0x72, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x26, "\x1cEVENT_TYPE_CLIENT_HALF_CLOSE\x10\x05\x12\x1d\n" +
0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, "\x19EVENT_TYPE_SERVER_TRAILER\x10\x06\x12\x15\n" +
0x76, 0x31, 0x2e, 0x47, 0x72, 0x70, 0x63, 0x4c, 0x6f, 0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x2e, "\x11EVENT_TYPE_CANCEL\x10\a\"B\n" +
0x4c, 0x6f, 0x67, 0x67, 0x65, 0x72, 0x52, 0x06, 0x6c, 0x6f, 0x67, 0x67, 0x65, 0x72, 0x12, 0x46, "\x06Logger\x12\x12\n" +
0x0a, 0x0d, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x5f, 0x68, 0x65, 0x61, 0x64, 0x65, 0x72, 0x18, "\x0eLOGGER_UNKNOWN\x10\x00\x12\x11\n" +
0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, "\rLOGGER_CLIENT\x10\x01\x12\x11\n" +
0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, "\rLOGGER_SERVER\x10\x02B\t\n" +
0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x48, 0x00, 0x52, 0x0c, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, "\apayload\"\xbb\x01\n" +
0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x12, 0x46, 0x0a, 0x0d, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, "\fClientHeader\x127\n" +
0x5f, 0x68, 0x65, 0x61, 0x64, 0x65, 0x72, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, "\bmetadata\x18\x01 \x01(\v2\x1b.grpc.binarylog.v1.MetadataR\bmetadata\x12\x1f\n" +
0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, "\vmethod_name\x18\x02 \x01(\tR\n" +
0x31, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x48, 0x00, "methodName\x12\x1c\n" +
0x52, 0x0c, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x12, 0x36, "\tauthority\x18\x03 \x01(\tR\tauthority\x123\n" +
0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, "\atimeout\x18\x04 \x01(\v2\x19.google.protobuf.DurationR\atimeout\"G\n" +
0x1a, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, "\fServerHeader\x127\n" +
0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x48, 0x00, 0x52, 0x07, 0x6d, "\bmetadata\x18\x01 \x01(\v2\x1b.grpc.binarylog.v1.MetadataR\bmetadata\"\xb1\x01\n" +
0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x36, 0x0a, 0x07, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x65, "\aTrailer\x127\n" +
0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, "\bmetadata\x18\x01 \x01(\v2\x1b.grpc.binarylog.v1.MetadataR\bmetadata\x12\x1f\n" +
0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x54, 0x72, 0x61, 0x69, "\vstatus_code\x18\x02 \x01(\rR\n" +
0x6c, 0x65, 0x72, 0x48, 0x00, 0x52, 0x07, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x65, 0x72, 0x12, 0x2b, "statusCode\x12%\n" +
0x0a, 0x11, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x5f, 0x74, 0x72, 0x75, 0x6e, 0x63, 0x61, "\x0estatus_message\x18\x03 \x01(\tR\rstatusMessage\x12%\n" +
0x74, 0x65, 0x64, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x70, 0x61, 0x79, 0x6c, 0x6f, "\x0estatus_details\x18\x04 \x01(\fR\rstatusDetails\"5\n" +
0x61, 0x64, 0x54, 0x72, 0x75, 0x6e, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x2e, 0x0a, 0x04, 0x70, "\aMessage\x12\x16\n" +
0x65, 0x65, 0x72, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x72, 0x70, 0x63, "\x06length\x18\x01 \x01(\rR\x06length\x12\x12\n" +
0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x41, 0x64, "\x04data\x18\x02 \x01(\fR\x04data\"B\n" +
0x64, 0x72, 0x65, 0x73, 0x73, 0x52, 0x04, 0x70, 0x65, 0x65, 0x72, 0x22, 0xf5, 0x01, 0x0a, 0x09, "\bMetadata\x126\n" +
0x45, 0x76, 0x65, 0x6e, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x16, 0x0a, 0x12, 0x45, 0x56, 0x45, "\x05entry\x18\x01 \x03(\v2 .grpc.binarylog.v1.MetadataEntryR\x05entry\"7\n" +
0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, "\rMetadataEntry\x12\x10\n" +
0x00, 0x12, 0x1c, 0x0a, 0x18, 0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, "\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x48, 0x45, 0x41, 0x44, 0x45, 0x52, 0x10, 0x01, 0x12, "\x05value\x18\x02 \x01(\fR\x05value\"\xb8\x01\n" +
0x1c, 0x0a, 0x18, 0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x45, "\aAddress\x123\n" +
0x52, 0x56, 0x45, 0x52, 0x5f, 0x48, 0x45, 0x41, 0x44, 0x45, 0x52, 0x10, 0x02, 0x12, 0x1d, 0x0a, "\x04type\x18\x01 \x01(\x0e2\x1f.grpc.binarylog.v1.Address.TypeR\x04type\x12\x18\n" +
0x19, 0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x43, 0x4c, 0x49, 0x45, "\aaddress\x18\x02 \x01(\tR\aaddress\x12\x17\n" +
0x4e, 0x54, 0x5f, 0x4d, 0x45, 0x53, 0x53, 0x41, 0x47, 0x45, 0x10, 0x03, 0x12, 0x1d, 0x0a, 0x19, "\aip_port\x18\x03 \x01(\rR\x06ipPort\"E\n" +
0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x45, 0x52, 0x56, 0x45, "\x04Type\x12\x10\n" +
0x52, 0x5f, 0x4d, 0x45, 0x53, 0x53, 0x41, 0x47, 0x45, 0x10, 0x04, 0x12, 0x20, 0x0a, 0x1c, 0x45, "\fTYPE_UNKNOWN\x10\x00\x12\r\n" +
0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, "\tTYPE_IPV4\x10\x01\x12\r\n" +
0x5f, 0x48, 0x41, 0x4c, 0x46, 0x5f, 0x43, 0x4c, 0x4f, 0x53, 0x45, 0x10, 0x05, 0x12, 0x1d, 0x0a, "\tTYPE_IPV6\x10\x02\x12\r\n" +
0x19, 0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x45, 0x52, 0x56, "\tTYPE_UNIX\x10\x03B\\\n" +
0x45, 0x52, 0x5f, 0x54, 0x52, 0x41, 0x49, 0x4c, 0x45, 0x52, 0x10, 0x06, 0x12, 0x15, 0x0a, 0x11, "\x14io.grpc.binarylog.v1B\x0eBinaryLogProtoP\x01Z2google.golang.org/grpc/binarylog/grpc_binarylog_v1b\x06proto3"
0x45, 0x56, 0x45, 0x4e, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x43, 0x41, 0x4e, 0x43, 0x45,
0x4c, 0x10, 0x07, 0x22, 0x42, 0x0a, 0x06, 0x4c, 0x6f, 0x67, 0x67, 0x65, 0x72, 0x12, 0x12, 0x0a,
0x0e, 0x4c, 0x4f, 0x47, 0x47, 0x45, 0x52, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10,
0x00, 0x12, 0x11, 0x0a, 0x0d, 0x4c, 0x4f, 0x47, 0x47, 0x45, 0x52, 0x5f, 0x43, 0x4c, 0x49, 0x45,
0x4e, 0x54, 0x10, 0x01, 0x12, 0x11, 0x0a, 0x0d, 0x4c, 0x4f, 0x47, 0x47, 0x45, 0x52, 0x5f, 0x53,
0x45, 0x52, 0x56, 0x45, 0x52, 0x10, 0x02, 0x42, 0x09, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x22, 0xbb, 0x01, 0x0a, 0x0c, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x48, 0x65, 0x61,
0x64, 0x65, 0x72, 0x12, 0x37, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18,
0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e,
0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
0x74, 0x61, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x1f, 0x0a, 0x0b,
0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x0a, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x1c, 0x0a,
0x09, 0x61, 0x75, 0x74, 0x68, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09,
0x52, 0x09, 0x61, 0x75, 0x74, 0x68, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x33, 0x0a, 0x07, 0x74,
0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67,
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44,
0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x07, 0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74,
0x22, 0x47, 0x0a, 0x0c, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72,
0x12, 0x37, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79,
0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52,
0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0xb1, 0x01, 0x0a, 0x07, 0x54, 0x72,
0x61, 0x69, 0x6c, 0x65, 0x72, 0x12, 0x37, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62,
0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x74, 0x61,
0x64, 0x61, 0x74, 0x61, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x1f,
0x0a, 0x0b, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x0a, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x43, 0x6f, 0x64, 0x65, 0x12,
0x25, 0x0a, 0x0e, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67,
0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x4d,
0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x25, 0x0a, 0x0e, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
0x5f, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0d,
0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x22, 0x35, 0x0a,
0x07, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x6c, 0x65, 0x6e, 0x67,
0x74, 0x68, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6c, 0x65, 0x6e, 0x67, 0x74, 0x68,
0x12, 0x12, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x04,
0x64, 0x61, 0x74, 0x61, 0x22, 0x42, 0x0a, 0x08, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61,
0x12, 0x36, 0x0a, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x20, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67,
0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74, 0x72,
0x79, 0x52, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x22, 0x37, 0x0a, 0x0d, 0x4d, 0x65, 0x74, 0x61,
0x64, 0x61, 0x74, 0x61, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76,
0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75,
0x65, 0x22, 0xb8, 0x01, 0x0a, 0x07, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x33, 0x0a,
0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x1f, 0x2e, 0x67, 0x72,
0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e,
0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x2e, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79,
0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x02, 0x20,
0x01, 0x28, 0x09, 0x52, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x17, 0x0a, 0x07,
0x69, 0x70, 0x5f, 0x70, 0x6f, 0x72, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x69,
0x70, 0x50, 0x6f, 0x72, 0x74, 0x22, 0x45, 0x0a, 0x04, 0x54, 0x79, 0x70, 0x65, 0x12, 0x10, 0x0a,
0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12,
0x0d, 0x0a, 0x09, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x50, 0x56, 0x34, 0x10, 0x01, 0x12, 0x0d,
0x0a, 0x09, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x50, 0x56, 0x36, 0x10, 0x02, 0x12, 0x0d, 0x0a,
0x09, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x49, 0x58, 0x10, 0x03, 0x42, 0x5c, 0x0a, 0x14,
0x69, 0x6f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x62, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f,
0x67, 0x2e, 0x76, 0x31, 0x42, 0x0e, 0x42, 0x69, 0x6e, 0x61, 0x72, 0x79, 0x4c, 0x6f, 0x67, 0x50,
0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x32, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67,
0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x62,
0x69, 0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x62, 0x69,
0x6e, 0x61, 0x72, 0x79, 0x6c, 0x6f, 0x67, 0x5f, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
})
var ( var (
file_grpc_binlog_v1_binarylog_proto_rawDescOnce sync.Once file_grpc_binlog_v1_binarylog_proto_rawDescOnce sync.Once

View File

@ -689,22 +689,31 @@ func (cc *ClientConn) Connect() {
cc.mu.Unlock() cc.mu.Unlock()
} }
// waitForResolvedAddrs blocks until the resolver has provided addresses or the // waitForResolvedAddrs blocks until the resolver provides addresses or the
// context expires. Returns nil unless the context expires first; otherwise // context expires, whichever happens first.
// returns a status error based on the context. //
func (cc *ClientConn) waitForResolvedAddrs(ctx context.Context) error { // Error is nil unless the context expires first; otherwise returns a status
// error based on the context.
//
// The returned boolean indicates whether it did block or not. If the
// resolution has already happened once before, it returns false without
// blocking. Otherwise, it waits for the resolution and returns true if
// resolution succeeded, or returns false along with an error if resolution
// failed.
func (cc *ClientConn) waitForResolvedAddrs(ctx context.Context) (bool, error) {
// This is on the RPC path, so we use a fast path to avoid the // This is on the RPC path, so we use a fast path to avoid the
// more-expensive "select" below after the resolver has returned once. // more-expensive "select" below after the resolver has returned once.
if cc.firstResolveEvent.HasFired() { if cc.firstResolveEvent.HasFired() {
return nil return false, nil
} }
internal.NewStreamWaitingForResolver()
select { select {
case <-cc.firstResolveEvent.Done(): case <-cc.firstResolveEvent.Done():
return nil return true, nil
case <-ctx.Done(): case <-ctx.Done():
return status.FromContextError(ctx.Err()).Err() return false, status.FromContextError(ctx.Err()).Err()
case <-cc.ctx.Done(): case <-cc.ctx.Done():
return ErrClientConnClosing return false, ErrClientConnClosing
} }
} }
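The rewritten helper above now reports whether it actually blocked, keeping the lock-free fast path once the resolver has fired. A generic, standalone sketch of that "fast path + select" shape follows; the event type here is a simplified stand-in, not gRPC's internal one.

```go
// Sketch: skip the select entirely once a one-time event has fired,
// otherwise block and report that we did.
package eventwait

import (
	"context"
	"sync/atomic"
)

type onceEvent struct {
	fired atomic.Bool
	done  chan struct{}
}

func newOnceEvent() *onceEvent { return &onceEvent{done: make(chan struct{})} }

func (e *onceEvent) Fire() {
	if e.fired.CompareAndSwap(false, true) {
		close(e.done)
	}
}

// wait returns (blocked, err): false on the fast path, true if it had to
// wait for the event, and a non-nil error if the context expired first.
func wait(ctx context.Context, e *onceEvent) (bool, error) {
	if e.fired.Load() { // hot path: avoid the more expensive select
		return false, nil
	}
	select {
	case <-e.done:
		return true, nil
	case <-ctx.Done():
		return false, ctx.Err()
	}
}
```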

View File

@ -120,6 +120,20 @@ type AuthInfo interface {
AuthType() string AuthType() string
} }
// AuthorityValidator validates the authority used to override the `:authority`
// header. This is an optional interface that implementations of AuthInfo can
// implement if they support per-RPC authority overrides. It is invoked when the
// application attempts to override the HTTP/2 `:authority` header using the
// CallAuthority call option.
type AuthorityValidator interface {
// ValidateAuthority checks the authority value used to override the
// `:authority` header. The authority parameter is the override value
// provided by the application via the CallAuthority option. This value
// typically corresponds to the server hostname or endpoint the RPC is
// targeting. It returns non-nil error if the validation fails.
ValidateAuthority(authority string) error
}
// ErrConnDispatched indicates that rawConn has been dispatched out of gRPC // ErrConnDispatched indicates that rawConn has been dispatched out of gRPC
// and the caller should not close rawConn. // and the caller should not close rawConn.
var ErrConnDispatched = errors.New("credentials: rawConn is dispatched out of gRPC") var ErrConnDispatched = errors.New("credentials: rawConn is dispatched out of gRPC")
@ -207,14 +221,32 @@ type RequestInfo struct {
AuthInfo AuthInfo AuthInfo AuthInfo
} }
// requestInfoKey is a struct to be used as the key to store RequestInfo in a
// context.
type requestInfoKey struct{}
// RequestInfoFromContext extracts the RequestInfo from the context if it exists. // RequestInfoFromContext extracts the RequestInfo from the context if it exists.
// //
// This API is experimental. // This API is experimental.
func RequestInfoFromContext(ctx context.Context) (ri RequestInfo, ok bool) { func RequestInfoFromContext(ctx context.Context) (ri RequestInfo, ok bool) {
ri, ok = icredentials.RequestInfoFromContext(ctx).(RequestInfo) ri, ok = ctx.Value(requestInfoKey{}).(RequestInfo)
return ri, ok return ri, ok
} }
// NewContextWithRequestInfo creates a new context from ctx and attaches ri to it.
//
// This RequestInfo will be accessible via RequestInfoFromContext.
//
// Intended to be used from tests for PerRPCCredentials implementations (that
// often need to check connection's SecurityLevel). Should not be used from
// non-test code: the gRPC client already prepares a context with the correct
// RequestInfo attached when calling PerRPCCredentials.GetRequestMetadata.
//
// This API is experimental.
func NewContextWithRequestInfo(ctx context.Context, ri RequestInfo) context.Context {
return context.WithValue(ctx, requestInfoKey{}, ri)
}
// ClientHandshakeInfo holds data to be passed to ClientHandshake. This makes // ClientHandshakeInfo holds data to be passed to ClientHandshake. This makes
// it possible to pass arbitrary data to the handshaker from gRPC, resolver, // it possible to pass arbitrary data to the handshaker from gRPC, resolver,
// balancer etc. Individual credential implementations control the actual // balancer etc. Individual credential implementations control the actual
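NewContextWithRequestInfo is described above as a testing aid for PerRPCCredentials that need RequestInfo (for example to check the connection's security level). A hedged test-side sketch; the credential type, token value, and method name are made up for the example.

```go
// Sketch: exercise a PerRPCCredentials implementation without a real
// connection by attaching RequestInfo to the context.
package creds_test

import (
	"context"
	"errors"
	"testing"

	"google.golang.org/grpc/credentials"
)

type headerCreds struct{ token string }

func (c headerCreds) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
	// Real implementations often inspect RequestInfo here, e.g. to check
	// the connection's security level before attaching a token.
	if _, ok := credentials.RequestInfoFromContext(ctx); !ok {
		return nil, errors.New("RequestInfo missing from context")
	}
	return map[string]string{"authorization": "Bearer " + c.token}, nil
}

func (c headerCreds) RequireTransportSecurity() bool { return false }

func TestHeaderCreds(t *testing.T) {
	ctx := credentials.NewContextWithRequestInfo(context.Background(), credentials.RequestInfo{
		Method: "/pkg.Service/Method", // placeholder method
	})
	md, err := headerCreds{token: "fake"}.GetRequestMetadata(ctx, "https://example.com")
	if err != nil || md["authorization"] == "" {
		t.Fatalf("unexpected result: %v %v", md, err)
	}
}
```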

View File

@ -30,7 +30,7 @@ import (
// NewCredentials returns a credentials which disables transport security. // NewCredentials returns a credentials which disables transport security.
// //
// Note that using this credentials with per-RPC credentials which require // Note that using this credentials with per-RPC credentials which require
// transport security is incompatible and will cause grpc.Dial() to fail. // transport security is incompatible and will cause RPCs to fail.
func NewCredentials() credentials.TransportCredentials { func NewCredentials() credentials.TransportCredentials {
return insecureTC{} return insecureTC{}
} }
@ -71,6 +71,12 @@ func (info) AuthType() string {
return "insecure" return "insecure"
} }
// ValidateAuthority allows any value to be overridden for the :authority
// header.
func (info) ValidateAuthority(string) error {
return nil
}
// insecureBundle implements an insecure bundle. // insecureBundle implements an insecure bundle.
// An insecure bundle provides a thin wrapper around insecureTC to support // An insecure bundle provides a thin wrapper around insecureTC to support
// the credentials.Bundle interface. // the credentials.Bundle interface.

View File

@ -22,6 +22,7 @@ import (
"context" "context"
"crypto/tls" "crypto/tls"
"crypto/x509" "crypto/x509"
"errors"
"fmt" "fmt"
"net" "net"
"net/url" "net/url"
@ -50,6 +51,21 @@ func (t TLSInfo) AuthType() string {
return "tls" return "tls"
} }
// ValidateAuthority validates the provided authority being used to override the
// :authority header by verifying it against the peer certificates. It returns a
// non-nil error if the validation fails.
func (t TLSInfo) ValidateAuthority(authority string) error {
var errs []error
for _, cert := range t.State.PeerCertificates {
var err error
if err = cert.VerifyHostname(authority); err == nil {
return nil
}
errs = append(errs, err)
}
return fmt.Errorf("credentials: invalid authority %q: %v", authority, errors.Join(errs...))
}
// cipherSuiteLookup returns the string version of a TLS cipher suite ID. // cipherSuiteLookup returns the string version of a TLS cipher suite ID.
func cipherSuiteLookup(cipherSuiteID uint16) string { func cipherSuiteLookup(cipherSuiteID uint16) string {
for _, s := range tls.CipherSuites() { for _, s := range tls.CipherSuites() {

View File

@ -360,7 +360,7 @@ func WithReturnConnectionError() DialOption {
// //
// Note that using this DialOption with per-RPC credentials (through // Note that using this DialOption with per-RPC credentials (through
// WithCredentialsBundle or WithPerRPCCredentials) which require transport // WithCredentialsBundle or WithPerRPCCredentials) which require transport
// security is incompatible and will cause grpc.Dial() to fail. // security is incompatible and will cause RPCs to fail.
// //
// Deprecated: use WithTransportCredentials and insecure.NewCredentials() // Deprecated: use WithTransportCredentials and insecure.NewCredentials()
// instead. Will be supported throughout 1.x. // instead. Will be supported throughout 1.x.

View File

@ -17,7 +17,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT. // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: // versions:
// protoc-gen-go v1.36.5 // protoc-gen-go v1.36.6
// protoc v5.27.1 // protoc v5.27.1
// source: grpc/health/v1/health.proto // source: grpc/health/v1/health.proto
@ -261,63 +261,29 @@ func (x *HealthListResponse) GetStatuses() map[string]*HealthCheckResponse {
var File_grpc_health_v1_health_proto protoreflect.FileDescriptor var File_grpc_health_v1_health_proto protoreflect.FileDescriptor
var file_grpc_health_v1_health_proto_rawDesc = string([]byte{ const file_grpc_health_v1_health_proto_rawDesc = "" +
0x0a, 0x1b, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2f, 0x76, 0x31, "\n" +
0x2f, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0e, 0x67, "\x1bgrpc/health/v1/health.proto\x12\x0egrpc.health.v1\".\n" +
0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x22, 0x2e, 0x0a, "\x12HealthCheckRequest\x12\x18\n" +
0x12, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x52, 0x65, 0x71, 0x75, "\aservice\x18\x01 \x01(\tR\aservice\"\xb1\x01\n" +
0x65, 0x73, 0x74, 0x12, 0x18, 0x0a, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x01, "\x13HealthCheckResponse\x12I\n" +
0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x22, 0xb1, 0x01, "\x06status\x18\x01 \x01(\x0e21.grpc.health.v1.HealthCheckResponse.ServingStatusR\x06status\"O\n" +
0x0a, 0x13, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x52, 0x65, 0x73, "\rServingStatus\x12\v\n" +
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x49, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, "\aUNKNOWN\x10\x00\x12\v\n" +
0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x31, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, "\aSERVING\x10\x01\x12\x0f\n" +
0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, "\vNOT_SERVING\x10\x02\x12\x13\n" +
0x63, 0x6b, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, "\x0fSERVICE_UNKNOWN\x10\x03\"\x13\n" +
0x6e, 0x67, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, "\x11HealthListRequest\"\xc4\x01\n" +
0x22, 0x4f, 0x0a, 0x0d, 0x53, 0x65, 0x72, 0x76, 0x69, 0x6e, 0x67, 0x53, 0x74, 0x61, 0x74, 0x75, "\x12HealthListResponse\x12L\n" +
0x73, 0x12, 0x0b, 0x0a, 0x07, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12, 0x0b, "\bstatuses\x18\x01 \x03(\v20.grpc.health.v1.HealthListResponse.StatusesEntryR\bstatuses\x1a`\n" +
0x0a, 0x07, 0x53, 0x45, 0x52, 0x56, 0x49, 0x4e, 0x47, 0x10, 0x01, 0x12, 0x0f, 0x0a, 0x0b, 0x4e, "\rStatusesEntry\x12\x10\n" +
0x4f, 0x54, 0x5f, 0x53, 0x45, 0x52, 0x56, 0x49, 0x4e, 0x47, 0x10, 0x02, 0x12, 0x13, 0x0a, 0x0f, "\x03key\x18\x01 \x01(\tR\x03key\x129\n" +
0x53, 0x45, 0x52, 0x56, 0x49, 0x43, 0x45, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, "\x05value\x18\x02 \x01(\v2#.grpc.health.v1.HealthCheckResponseR\x05value:\x028\x012\xfd\x01\n" +
0x03, 0x22, 0x13, 0x0a, 0x11, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x4c, 0x69, 0x73, 0x74, 0x52, "\x06Health\x12P\n" +
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0xc4, 0x01, 0x0a, 0x12, 0x48, 0x65, 0x61, 0x6c, 0x74, "\x05Check\x12\".grpc.health.v1.HealthCheckRequest\x1a#.grpc.health.v1.HealthCheckResponse\x12M\n" +
0x68, 0x4c, 0x69, 0x73, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x4c, 0x0a, "\x04List\x12!.grpc.health.v1.HealthListRequest\x1a\".grpc.health.v1.HealthListResponse\x12R\n" +
0x08, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, "\x05Watch\x12\".grpc.health.v1.HealthCheckRequest\x1a#.grpc.health.v1.HealthCheckResponse0\x01Bp\n" +
0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, "\x11io.grpc.health.v1B\vHealthProtoP\x01Z,google.golang.org/grpc/health/grpc_health_v1\xa2\x02\fGrpcHealthV1\xaa\x02\x0eGrpc.Health.V1b\x06proto3"
0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x4c, 0x69, 0x73, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x2e, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72,
0x79, 0x52, 0x08, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x65, 0x73, 0x1a, 0x60, 0x0a, 0x0d, 0x53,
0x74, 0x61, 0x74, 0x75, 0x73, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03,
0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x39,
0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x23, 0x2e,
0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x2e, 0x48,
0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x32, 0xfd, 0x01,
0x0a, 0x06, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x12, 0x50, 0x0a, 0x05, 0x43, 0x68, 0x65, 0x63,
0x6b, 0x12, 0x22, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e,
0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61,
0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65,
0x63, 0x6b, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x4d, 0x0a, 0x04, 0x4c, 0x69,
0x73, 0x74, 0x12, 0x21, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68,
0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x4c, 0x69, 0x73, 0x74, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61,
0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x4c, 0x69, 0x73,
0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x52, 0x0a, 0x05, 0x57, 0x61, 0x74,
0x63, 0x68, 0x12, 0x22, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68,
0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65,
0x61, 0x6c, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68,
0x65, 0x63, 0x6b, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x30, 0x01, 0x42, 0x70, 0x0a,
0x11, 0x69, 0x6f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e,
0x76, 0x31, 0x42, 0x0b, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50,
0x01, 0x5a, 0x2c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67,
0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68,
0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x68, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x5f, 0x76, 0x31, 0xa2,
0x02, 0x0c, 0x47, 0x72, 0x70, 0x63, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x56, 0x31, 0xaa, 0x02,
0x0e, 0x47, 0x72, 0x70, 0x63, 0x2e, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x2e, 0x56, 0x31, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
})
var ( var (
file_grpc_health_v1_health_proto_rawDescOnce sync.Once file_grpc_health_v1_health_proto_rawDescOnce sync.Once

View File

@ -20,20 +20,6 @@ import (
"context" "context"
) )
// requestInfoKey is a struct to be used as the key to store RequestInfo in a
// context.
type requestInfoKey struct{}
// NewRequestInfoContext creates a context with ri.
func NewRequestInfoContext(ctx context.Context, ri any) context.Context {
return context.WithValue(ctx, requestInfoKey{}, ri)
}
// RequestInfoFromContext extracts the RequestInfo from ctx.
func RequestInfoFromContext(ctx context.Context) any {
return ctx.Value(requestInfoKey{})
}
// clientHandshakeInfoKey is a struct used as the key to store // clientHandshakeInfoKey is a struct used as the key to store
// ClientHandshakeInfo in a context. // ClientHandshakeInfo in a context.
type clientHandshakeInfoKey struct{} type clientHandshakeInfoKey struct{}

View File

@@ -36,7 +36,7 @@ var (
	// LeastRequestLB is set if we should support the least_request_experimental
	// LB policy, which can be enabled by setting the environment variable
	// "GRPC_EXPERIMENTAL_ENABLE_LEAST_REQUEST" to "true".
-	LeastRequestLB = boolFromEnv("GRPC_EXPERIMENTAL_ENABLE_LEAST_REQUEST", false)
+	LeastRequestLB = boolFromEnv("GRPC_EXPERIMENTAL_ENABLE_LEAST_REQUEST", true)
	// ALTSMaxConcurrentHandshakes is the maximum number of concurrent ALTS
	// handshakes that can be performed.
	ALTSMaxConcurrentHandshakes = uint64FromEnv("GRPC_ALTS_MAX_CONCURRENT_HANDSHAKES", 100, 1, 100)
@@ -69,6 +69,10 @@ var (
	// to gRFC A76. It can be enabled by setting the environment variable
	// "GRPC_EXPERIMENTAL_RING_HASH_SET_REQUEST_HASH_KEY" to "true".
	RingHashSetRequestHashKey = boolFromEnv("GRPC_EXPERIMENTAL_RING_HASH_SET_REQUEST_HASH_KEY", false)
+
+	// ALTSHandshakerKeepaliveParams is set if we should add the
+	// KeepaliveParams when dial the ALTS handshaker service.
+	ALTSHandshakerKeepaliveParams = boolFromEnv("GRPC_EXPERIMENTAL_ALTS_HANDSHAKER_KEEPALIVE_PARAMS", false)
)

func boolFromEnv(envVar string, def bool) bool {


@@ -63,4 +63,9 @@ var (
	// For more details, see:
	// https://github.com/grpc/proposal/blob/master/A82-xds-system-root-certs.md.
	XDSSystemRootCertsEnabled = boolFromEnv("GRPC_EXPERIMENTAL_XDS_SYSTEM_ROOT_CERTS", false)
+
+	// XDSSPIFFEEnabled controls if SPIFFE Bundle Maps can be used as roots of
+	// trust. For more details, see:
+	// https://github.com/grpc/proposal/blob/master/A87-mtls-spiffe-support.md
+	XDSSPIFFEEnabled = boolFromEnv("GRPC_EXPERIMENTAL_XDS_MTLS_SPIFFE", false)
)


@@ -21,28 +21,25 @@
package grpcsync

import (
-	"sync"
	"sync/atomic"
)

// Event represents a one-time event that may occur in the future.
type Event struct {
-	fired int32
+	fired atomic.Bool
	c     chan struct{}
-	o     sync.Once
}

// Fire causes e to complete. It is safe to call multiple times, and
// concurrently. It returns true iff this call to Fire caused the signaling
-// channel returned by Done to close.
+// channel returned by Done to close. If Fire returns false, it is possible
+// the Done channel has not been closed yet.
func (e *Event) Fire() bool {
-	ret := false
-	e.o.Do(func() {
-		atomic.StoreInt32(&e.fired, 1)
+	if e.fired.CompareAndSwap(false, true) {
		close(e.c)
-		ret = true
-	})
-	return ret
+		return true
+	}
+	return false
}

// Done returns a channel that will be closed when Fire is called.
@@ -52,7 +49,7 @@ func (e *Event) Done() <-chan struct{} {
// HasFired returns true if Fire has been called.
func (e *Event) HasFired() bool {
-	return atomic.LoadInt32(&e.fired) == 1
+	return e.fired.Load()
}

// NewEvent returns a new, ready-to-use Event.
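For readers less familiar with this pattern, the sketch below shows the same one-shot signalling idea in a standalone form. It assumes nothing beyond the standard library; oneShot is a hypothetical name for illustration, not the grpcsync type, and it is not part of the diff above.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// oneShot mirrors the pattern above: an atomic.Bool guards close(c) so the
// channel is closed exactly once even when fire is called concurrently.
type oneShot struct {
	fired atomic.Bool
	c     chan struct{}
}

func newOneShot() *oneShot { return &oneShot{c: make(chan struct{})} }

// fire reports whether this call was the one that closed the channel.
func (o *oneShot) fire() bool {
	if o.fired.CompareAndSwap(false, true) {
		close(o.c)
		return true
	}
	return false
}

func main() {
	ev := newOneShot()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if ev.fire() {
				fmt.Printf("goroutine %d closed the channel\n", id)
			}
		}(i)
	}
	<-ev.c // unblocks as soon as any goroutine has fired
	wg.Wait()
}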


@@ -266,6 +266,13 @@ var (
	TimeAfterFunc = func(d time.Duration, f func()) Timer {
		return time.AfterFunc(d, f)
	}
+
+	// NewStreamWaitingForResolver is a test hook that is triggered when a
+	// new stream blocks while waiting for name resolution. This can be
+	// used in tests to synchronize resolver updates and avoid race conditions.
+	// When set, the function will be called before the stream enters
+	// the blocking state.
+	NewStreamWaitingForResolver = func() {}
)

// HealthChecker defines the signature of the client-side LB channel health


@@ -236,3 +236,11 @@ func IsRestrictedControlPlaneCode(s *Status) bool {
	}
	return false
}
+
+// RawStatusProto returns the internal protobuf message for use by gRPC itself.
+func RawStatusProto(s *Status) *spb.Status {
+	if s == nil {
+		return nil
+	}
+	return s.s
+}


@@ -545,7 +545,7 @@ func (t *http2Client) createHeaderFields(ctx context.Context, callHdr *CallHdr)
		Method:   callHdr.Method,
		AuthInfo: t.authInfo,
	}
-	ctxWithRequestInfo := icredentials.NewRequestInfoContext(ctx, ri)
+	ctxWithRequestInfo := credentials.NewContextWithRequestInfo(ctx, ri)
	authData, err := t.getTrAuthData(ctxWithRequestInfo, aud)
	if err != nil {
		return nil, err
@@ -592,6 +592,9 @@ func (t *http2Client) createHeaderFields(ctx context.Context, callHdr *CallHdr)
		// Send out timeout regardless its value. The server can detect timeout context by itself.
		// TODO(mmukhi): Perhaps this field should be updated when actually writing out to the wire.
		timeout := time.Until(dl)
+		if timeout <= 0 {
+			return nil, status.Error(codes.DeadlineExceeded, context.DeadlineExceeded.Error())
+		}
		headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-timeout", Value: grpcutil.EncodeDuration(timeout)})
	}
	for k, v := range authData {
@@ -749,6 +752,25 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*ClientS
		callHdr = &newCallHdr
	}

+	// The authority specified via the `CallAuthority` CallOption takes the
+	// highest precedence when determining the `:authority` header. It overrides
+	// any value present in the Host field of CallHdr. Before applying this
+	// override, the authority string is validated. If the credentials do not
+	// implement the AuthorityValidator interface, or if validation fails, the
+	// RPC is failed with a status code of `UNAVAILABLE`.
+	if callHdr.Authority != "" {
+		auth, ok := t.authInfo.(credentials.AuthorityValidator)
+		if !ok {
+			return nil, &NewStreamError{Err: status.Errorf(codes.Unavailable, "credentials type %q does not implement the AuthorityValidator interface, but authority override specified with CallAuthority call option", t.authInfo.AuthType())}
+		}
+		if err := auth.ValidateAuthority(callHdr.Authority); err != nil {
+			return nil, &NewStreamError{Err: status.Errorf(codes.Unavailable, "failed to validate authority %q : %v", callHdr.Authority, err)}
+		}
+		newCallHdr := *callHdr
+		newCallHdr.Host = callHdr.Authority
+		callHdr = &newCallHdr
+	}
+
	headerFields, err := t.createHeaderFields(ctx, callHdr)
	if err != nil {
		return nil, &NewStreamError{Err: err, AllowTransparentRetry: false}


@@ -39,6 +39,7 @@ import (
	"google.golang.org/grpc/internal/grpclog"
	"google.golang.org/grpc/internal/grpcutil"
	"google.golang.org/grpc/internal/pretty"
+	istatus "google.golang.org/grpc/internal/status"
	"google.golang.org/grpc/internal/syscall"
	"google.golang.org/grpc/mem"
	"google.golang.org/protobuf/proto"
@@ -1055,7 +1056,7 @@ func (t *http2Server) writeHeaderLocked(s *ServerStream) error {
	return nil
}

-// WriteStatus sends stream status to the client and terminates the stream.
+// writeStatus sends stream status to the client and terminates the stream.
// There is no further I/O operations being able to perform on this stream.
// TODO(zhaoq): Now it indicates the end of entire stream. Revisit if early
// OK is adopted.
@@ -1083,7 +1084,7 @@ func (t *http2Server) writeStatus(s *ServerStream, st *status.Status) error {
	headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-status", Value: strconv.Itoa(int(st.Code()))})
	headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())})
-	if p := st.Proto(); p != nil && len(p.Details) > 0 {
+	if p := istatus.RawStatusProto(st); len(p.GetDetails()) > 0 {
		// Do not use the user's grpc-status-details-bin (if present) if we are
		// even attempting to set our own.
		delete(s.trailer, grpcStatusDetailsBinHeader)


@@ -196,11 +196,14 @@ func decodeTimeout(s string) (time.Duration, error) {
	if !ok {
		return 0, fmt.Errorf("transport: timeout unit is not recognized: %q", s)
	}
-	t, err := strconv.ParseInt(s[:size-1], 10, 64)
+	t, err := strconv.ParseUint(s[:size-1], 10, 64)
	if err != nil {
		return 0, err
	}
-	const maxHours = math.MaxInt64 / int64(time.Hour)
+	if t == 0 {
+		return 0, fmt.Errorf("transport: timeout must be positive: %q", s)
+	}
+	const maxHours = math.MaxInt64 / uint64(time.Hour)
	if d == time.Hour && t > maxHours {
		// This timeout would overflow math.MaxInt64; clamp it.
		return time.Duration(math.MaxInt64), nil
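The rules being tightened here follow the gRPC wire format for the grpc-timeout header: an ASCII integer of at most eight digits followed by a unit character (H, M, S, m, u, or n). The sketch below is a simplified, self-contained version of that parsing, including the new rejection of zero values and the overflow clamp; it is an illustration under those assumptions, not the vendored function.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

var timeoutUnits = map[byte]time.Duration{
	'H': time.Hour, 'M': time.Minute, 'S': time.Second,
	'm': time.Millisecond, 'u': time.Microsecond, 'n': time.Nanosecond,
}

// decodeTimeout parses a grpc-timeout style value: a positive unsigned
// integer of at most eight digits plus a unit suffix. Hour values that would
// overflow int64 nanoseconds are clamped to math.MaxInt64.
func decodeTimeout(s string) (time.Duration, error) {
	if len(s) < 2 || len(s) > 9 {
		return 0, fmt.Errorf("timeout string has bad length: %q", s)
	}
	d, ok := timeoutUnits[s[len(s)-1]]
	if !ok {
		return 0, fmt.Errorf("timeout unit is not recognized: %q", s)
	}
	t, err := strconv.ParseUint(s[:len(s)-1], 10, 64)
	if err != nil {
		return 0, err
	}
	if t == 0 {
		return 0, fmt.Errorf("timeout must be positive: %q", s)
	}
	const maxHours = math.MaxInt64 / uint64(time.Hour)
	if d == time.Hour && t > maxHours {
		return time.Duration(math.MaxInt64), nil // clamp rather than overflow
	}
	return time.Duration(t) * d, nil
}

func main() {
	for _, in := range []string{"100m", "0S", "99999999H"} {
		dur, err := decodeTimeout(in)
		fmt.Printf("%-10s => %v %v\n", in, dur, err)
	}
}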


@@ -540,6 +540,11 @@ type CallHdr struct {
	PreviousAttempts int    // value of grpc-previous-rpc-attempts header to set
	DoneFunc         func() // called when the stream is finished
+
+	// Authority is used to explicitly override the `:authority` header. If set,
+	// this value takes precedence over the Host field and will be used as the
+	// value for the `:authority` header.
+	Authority string
}

// ClientTransport is the common interface for all gRPC client-side transport


@@ -160,6 +160,7 @@ type callInfo struct {
	codec                 baseCodec
	maxRetryRPCBufferSize int
	onFinish              []func(err error)
+	authority             string
}

func defaultCallInfo() *callInfo {
@@ -365,6 +366,36 @@ func (o MaxRecvMsgSizeCallOption) before(c *callInfo) error {
}

func (o MaxRecvMsgSizeCallOption) after(*callInfo, *csAttempt) {}

+// CallAuthority returns a CallOption that sets the HTTP/2 :authority header of
+// an RPC to the specified value. When using CallAuthority, the credentials in
+// use must implement the AuthorityValidator interface.
+//
+// # Experimental
+//
+// Notice: This API is EXPERIMENTAL and may be changed or removed in a later
+// release.
+func CallAuthority(authority string) CallOption {
+	return AuthorityOverrideCallOption{Authority: authority}
+}
+
+// AuthorityOverrideCallOption is a CallOption that indicates the HTTP/2
+// :authority header value to use for the call.
+//
+// # Experimental
+//
+// Notice: This type is EXPERIMENTAL and may be changed or removed in a later
+// release.
+type AuthorityOverrideCallOption struct {
+	Authority string
+}
+
+func (o AuthorityOverrideCallOption) before(c *callInfo) error {
+	c.authority = o.Authority
+	return nil
+}
+
+func (o AuthorityOverrideCallOption) after(*callInfo, *csAttempt) {}
+
// MaxCallSendMsgSize returns a CallOption which sets the maximum message size
// in bytes the client can send. If this is not set, gRPC uses the default
// `math.MaxInt32`.
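As a usage illustration of the option added above, the sketch below issues a health check with a per-call :authority override. The CA bundle path, target, and override name are placeholders; TLS credentials are used because their AuthInfo is expected to implement AuthorityValidator, and validation failures surface as codes.Unavailable (see the transport change earlier). This is a hedged example, not part of the diff.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Placeholder CA bundle and target; adjust for a real deployment.
	creds, err := credentials.NewClientTLSFromFile("ca.pem", "")
	if err != nil {
		log.Fatalf("loading TLS credentials: %v", err)
	}
	conn, err := grpc.NewClient("dns:///backend.internal:443", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Override the :authority header for this call only (EXPERIMENTAL API).
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{},
		grpc.CallAuthority("public-name.example.com"))
	if err != nil {
		log.Fatalf("health check: %v", err)
	}
	log.Printf("health status: %v", resp.GetStatus())
}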


@@ -38,6 +38,15 @@ type RPCTagInfo struct {
	// FailFast indicates if this RPC is failfast.
	// This field is only valid on client side, it's always false on server side.
	FailFast bool
+	// NameResolutionDelay indicates if the RPC needed to wait for the
+	// initial name resolver update before it could begin. This should only
+	// happen if the channel is IDLE when the RPC is started. Note that
+	// all retry or hedging attempts for an RPC that experienced a delay
+	// will have it set.
+	//
+	// This field is only valid on the client side; it is always false on
+	// the server side.
+	NameResolutionDelay bool
}

// Handler defines the interface for the related stats handling (e.g., RPCs, connections).
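A hedged sketch of how a stats.Handler could surface the new field is shown below; delayLogger is an illustrative name, not an existing type, and only the TagRPC method does real work here.

package statsexample

import (
	"context"
	"log"

	"google.golang.org/grpc/stats"
)

// delayLogger is a minimal stats.Handler that only notes whether an RPC had
// to wait for the initial name-resolver update before starting.
type delayLogger struct{}

func (delayLogger) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
	if info.NameResolutionDelay {
		log.Printf("%s waited for initial name resolution", info.FullMethodName)
	}
	return ctx
}

func (delayLogger) HandleRPC(context.Context, stats.RPCStats) {}

func (delayLogger) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

func (delayLogger) HandleConn(context.Context, stats.ConnStats) {}

Such a handler would be registered on the client with grpc.WithStatsHandler(delayLogger{}) when the connection is created.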


@@ -101,9 +101,9 @@ type ClientStream interface {
	// It must only be called after stream.CloseAndRecv has returned, or
	// stream.Recv has returned a non-nil error (including io.EOF).
	Trailer() metadata.MD
-	// CloseSend closes the send direction of the stream. It closes the stream
-	// when non-nil error is met. It is also not safe to call CloseSend
-	// concurrently with SendMsg.
+	// CloseSend closes the send direction of the stream. This method always
+	// returns a nil error. The status of the stream may be discovered using
+	// RecvMsg. It is also not safe to call CloseSend concurrently with SendMsg.
	CloseSend() error
	// Context returns the context for this stream.
	//
@@ -212,14 +212,15 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
	}

	// Provide an opportunity for the first RPC to see the first service config
	// provided by the resolver.
-	if err := cc.waitForResolvedAddrs(ctx); err != nil {
+	nameResolutionDelayed, err := cc.waitForResolvedAddrs(ctx)
+	if err != nil {
		return nil, err
	}
	var mc serviceconfig.MethodConfig
	var onCommit func()
	newStream := func(ctx context.Context, done func()) (iresolver.ClientStream, error) {
-		return newClientStreamWithParams(ctx, desc, cc, method, mc, onCommit, done, opts...)
+		return newClientStreamWithParams(ctx, desc, cc, method, mc, onCommit, done, nameResolutionDelayed, opts...)
	}

	rpcInfo := iresolver.RPCInfo{Context: ctx, Method: method}
@@ -257,7 +258,7 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
	return newStream(ctx, func() {})
}

-func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, mc serviceconfig.MethodConfig, onCommit, doneFunc func(), opts ...CallOption) (_ iresolver.ClientStream, err error) {
+func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, mc serviceconfig.MethodConfig, onCommit, doneFunc func(), nameResolutionDelayed bool, opts ...CallOption) (_ iresolver.ClientStream, err error) {
	callInfo := defaultCallInfo()
	if mc.WaitForReady != nil {
		callInfo.failFast = !*mc.WaitForReady
@@ -296,6 +297,7 @@ func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *Client
		Method:         method,
		ContentSubtype: callInfo.contentSubtype,
		DoneFunc:       doneFunc,
+		Authority:      callInfo.authority,
	}

	// Set our outgoing compression according to the UseCompressor CallOption, if
@@ -321,19 +323,20 @@ func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *Client
	}
	cs := &clientStream{
		callHdr:             callHdr,
		ctx:                 ctx,
		methodConfig:        &mc,
		opts:                opts,
		callInfo:            callInfo,
		cc:                  cc,
		desc:                desc,
		codec:               callInfo.codec,
		compressorV0:        compressorV0,
		compressorV1:        compressorV1,
		cancel:              cancel,
		firstAttempt:        true,
		onCommit:            onCommit,
+		nameResolutionDelay: nameResolutionDelayed,
	}

	if !cc.dopts.disableRetry {
		cs.retryThrottler = cc.retryThrottler.Load().(*retryThrottler)
@@ -417,7 +420,7 @@ func (cs *clientStream) newAttemptLocked(isTransparent bool) (*csAttempt, error)
	var beginTime time.Time
	shs := cs.cc.dopts.copts.StatsHandlers
	for _, sh := range shs {
-		ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: cs.callInfo.failFast})
+		ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: cs.callInfo.failFast, NameResolutionDelay: cs.nameResolutionDelay})
		beginTime = time.Now()
		begin := &stats.Begin{
			Client: true,
@@ -573,6 +576,9 @@ type clientStream struct {
	onCommit func()
	replayBuffer     []replayOp // operations to replay on retry
	replayBufferSize int        // current size of replayBuffer
+
+	// nameResolutionDelay indicates if there was a delay in the name resolution.
+	// This field is only valid on client side, it's always false on server side.
+	nameResolutionDelay bool
}

type replayOp struct {
@@ -987,7 +993,7 @@ func (cs *clientStream) RecvMsg(m any) error {
func (cs *clientStream) CloseSend() error {
	if cs.sentLast {
-		// TODO: return an error and finish the stream instead, due to API misuse?
+		// Return a nil error on repeated calls to this method.
		return nil
	}
	cs.sentLast = true
@@ -1008,7 +1014,10 @@
			binlog.Log(cs.ctx, chc)
		}
	}
-	// We never returned an error here for reasons.
+	// We don't return an error here as we expect users to read all messages
+	// from the stream and get the RPC status from RecvMsg(). Note that
+	// SendMsg() must return an error when one occurs so the application
+	// knows to stop sending messages, but that does not apply here.
	return nil
}
@@ -1372,7 +1381,7 @@ func (as *addrConnStream) Trailer() metadata.MD {
func (as *addrConnStream) CloseSend() error {
	if as.sentLast {
-		// TODO: return an error and finish the stream instead, due to API misuse?
+		// Return a nil error on repeated calls to this method.
		return nil
	}
	as.sentLast = true
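The CloseSend contract documented above (it always returns nil; the real status comes from receiving) typically shows up in client-streaming code like the sketch below, which borrows the RouteGuide types from the grpc-go examples module purely for illustration; any client-streaming RPC follows the same shape.

package closesendexample

import (
	"context"

	pb "google.golang.org/grpc/examples/route_guide/routeguide"
)

// recordRoute streams points to the server and returns the summary. The final
// RPC status is obtained from CloseAndRecv (CloseSend followed by RecvMsg),
// never from CloseSend itself.
func recordRoute(ctx context.Context, client pb.RouteGuideClient, points []*pb.Point) (*pb.RouteSummary, error) {
	stream, err := client.RecordRoute(ctx)
	if err != nil {
		return nil, err
	}
	for _, p := range points {
		if err := stream.Send(p); err != nil {
			// Send fails once the stream is broken; the real RPC status is
			// reported by CloseAndRecv below, so stop sending here.
			break
		}
	}
	return stream.CloseAndRecv()
}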


@@ -19,4 +19,4 @@
package grpc

// Version is the current grpc version.
-const Version = "1.72.2"
+const Version = "1.73.0"

vendor/modules.txt

@@ -312,7 +312,7 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/inte
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil
-# go.opentelemetry.io/otel v1.34.0
+# go.opentelemetry.io/otel v1.35.0
## explicit; go 1.22.0
go.opentelemetry.io/otel
go.opentelemetry.io/otel/attribute
@@ -340,12 +340,12 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry
-# go.opentelemetry.io/otel/metric v1.34.0
+# go.opentelemetry.io/otel/metric v1.35.0
## explicit; go 1.22.0
go.opentelemetry.io/otel/metric
go.opentelemetry.io/otel/metric/embedded
go.opentelemetry.io/otel/metric/noop
-# go.opentelemetry.io/otel/sdk v1.34.0
+# go.opentelemetry.io/otel/sdk v1.35.0
## explicit; go 1.22.0
go.opentelemetry.io/otel/sdk
go.opentelemetry.io/otel/sdk/instrumentation
@@ -353,10 +353,11 @@ go.opentelemetry.io/otel/sdk/internal/env
go.opentelemetry.io/otel/sdk/internal/x
go.opentelemetry.io/otel/sdk/resource
go.opentelemetry.io/otel/sdk/trace
-# go.opentelemetry.io/otel/trace v1.34.0
+# go.opentelemetry.io/otel/trace v1.35.0
## explicit; go 1.22.0
go.opentelemetry.io/otel/trace
go.opentelemetry.io/otel/trace/embedded
+go.opentelemetry.io/otel/trace/internal/telemetry
go.opentelemetry.io/otel/trace/noop
# go.opentelemetry.io/proto/otlp v1.3.1
## explicit; go 1.17
@@ -389,7 +390,7 @@ go.uber.org/zap/internal/pool
go.uber.org/zap/internal/stacktrace
go.uber.org/zap/zapcore
go.uber.org/zap/zapgrpc
-# golang.org/x/crypto v0.38.0
+# golang.org/x/crypto v0.39.0
## explicit; go 1.23.0
golang.org/x/crypto/blowfish
golang.org/x/crypto/chacha20
@@ -407,10 +408,10 @@ golang.org/x/crypto/ssh/internal/bcrypt_pbkdf
## explicit; go 1.20
golang.org/x/exp/constraints
golang.org/x/exp/slices
-# golang.org/x/mod v0.24.0
+# golang.org/x/mod v0.25.0
## explicit; go 1.23.0
golang.org/x/mod/sumdb/dirhash
-# golang.org/x/net v0.40.0
+# golang.org/x/net v0.41.0
## explicit; go 1.23.0
golang.org/x/net/context
golang.org/x/net/html
@@ -426,11 +427,11 @@ golang.org/x/net/internal/timeseries
golang.org/x/net/proxy
golang.org/x/net/trace
golang.org/x/net/websocket
-# golang.org/x/oauth2 v0.26.0
-## explicit; go 1.18
+# golang.org/x/oauth2 v0.28.0
+## explicit; go 1.23.0
golang.org/x/oauth2
golang.org/x/oauth2/internal
-# golang.org/x/sync v0.14.0
+# golang.org/x/sync v0.15.0
## explicit; go 1.23.0
golang.org/x/sync/singleflight
# golang.org/x/sys v0.33.0
@@ -443,7 +444,7 @@ golang.org/x/sys/windows/registry
# golang.org/x/term v0.32.0
## explicit; go 1.23.0
golang.org/x/term
-# golang.org/x/text v0.25.0
+# golang.org/x/text v0.26.0
## explicit; go 1.23.0
golang.org/x/text/encoding
golang.org/x/text/encoding/charmap
@@ -477,25 +478,25 @@ golang.org/x/text/width
# golang.org/x/time v0.3.0
## explicit
golang.org/x/time/rate
-# golang.org/x/tools v0.31.0
+# golang.org/x/tools v0.33.0
## explicit; go 1.23.0
golang.org/x/tools/cover
golang.org/x/tools/go/ast/inspector
golang.org/x/tools/internal/astutil/edge
# google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de
## explicit; go 1.19
-# google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a
-## explicit; go 1.22
+# google.golang.org/genproto/googleapis/api v0.0.0-20250324211829-b45e905df463
+## explicit; go 1.23.0
google.golang.org/genproto/googleapis/api
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/api/expr/v1alpha1
google.golang.org/genproto/googleapis/api/httpbody
-# google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a
-## explicit; go 1.22
+# google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463
+## explicit; go 1.23.0
google.golang.org/genproto/googleapis/rpc/errdetails
google.golang.org/genproto/googleapis/rpc/status
-# google.golang.org/grpc v1.72.2
-## explicit; go 1.23
+# google.golang.org/grpc v1.73.0
+## explicit; go 1.23.0
google.golang.org/grpc
google.golang.org/grpc/attributes
google.golang.org/grpc/backoff