CVE-2026-29781 is a denial-of-service vulnerability in the Sliver C2 server. Active exploits exist with no official patch available; immediate mitigation is required.
A vulnerability exists in the Sliver C2 server's Protobuf unmarshalling logic due to a systemic lack of nil-pointer validation. By extracting valid implant credentials and omitting nested fields in a signed message, an authenticated actor can trigger an unhandled runtime panic. Because the mTLS, WireGuard, and DNS transport layers lack the panic recovery middleware present in the HTTP transport, this results in a global process termination. While requiring post-authentication access (a captured implant), this flaw effectively acts as an infrastructure "kill-switch," instantly severing all active sessions across the entire fleet and requiring a manual server restart to restore operations.
Sliver encapsulates all C2 traffic in a generic sliverpb.Envelope, which acts as a routing wrapper. When the server receives an Envelope with Type = 53 (MsgBeaconRegister), the internal router strips the envelope and passes the raw Data bytes directly to the vulnerable handlers.beaconRegisterHandler(implantConn, data). This flow is consistent across all transports, but the error handling of the transport itself determines the final impact.
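This dispatch pattern can be modeled with a small, stdlib-only sketch. The `Envelope` struct, `Handler` signature, and handler table below are simplified stand-ins for the sliverpb-generated types, not Sliver's actual code; only the `Type = 53` constant comes from the text above:

```go
package main

import "fmt"

// Simplified stand-ins for sliverpb.Envelope and the server's handler table.
// The real types are Protobuf-generated; these are illustrative only.
type Envelope struct {
	Type uint32
	Data []byte
}

const MsgBeaconRegister uint32 = 53 // matches the Type value cited above

type Handler func(data []byte) *Envelope

// route mimics the server's dispatch: it strips the envelope and hands
// the raw Data bytes directly to whatever handler is registered for Type.
func route(handlers map[uint32]Handler, env *Envelope) *Envelope {
	if handler, ok := handlers[env.Type]; ok {
		return handler(env.Data)
	}
	return nil
}

func main() {
	handlers := map[uint32]Handler{
		MsgBeaconRegister: func(data []byte) *Envelope {
			fmt.Printf("beaconRegisterHandler got %d raw bytes\n", len(data))
			return &Envelope{Type: MsgBeaconRegister}
		},
	}
	resp := route(handlers, &Envelope{Type: MsgBeaconRegister, Data: []byte{0x0a}})
	fmt.Println("handler responded:", resp != nil)
}
```

The key point of the model is that `route` never inspects `Data`: validation is entirely the handler's responsibility, which is why a missing nested field travels all the way to the dereference site.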
The core of the vulnerability lies in how Protobuf messages are handled within the Go runtime. In proto3, all fields are optional by design. When a message contains a nested sub-message (like Register inside BeaconRegister), the Go Protobuf implementation represents this sub-message as a pointer, which remains nil when the field is absent from the wire data.
In server/handlers/beacons.go, the server unmarshals the incoming data without subsequent validation of its nested structures:
func beaconRegisterHandler(implantConn *core.ImplantConnection, data []byte) *sliverpb.Envelope {
// ...
beaconReg := &sliverpb.BeaconRegister{}
err := proto.Unmarshal(data, beaconReg)
// Successful even if 'Register' sub-message is omitted
// VULNERABILITY: beaconReg.Register is nil if omitted by sender.
// Accessing any property of a nil pointer triggers an immediate runtime panic.
beaconRegUUID, _ := uuid.FromString(beaconReg.Register.Uuid)
// ...
}
If an attacker constructs a BeaconRegister message and deliberately omits the Register field, proto.Unmarshal parses the stream without error but leaves the Register pointer as nil. The subsequent attempt to access beaconReg.Register.Uuid triggers a Nil-Pointer Dereference.
Beyond the beacon registration, the investigation revealed a systemic pattern of missing nil-checks across various handlers. These vulnerabilities follow the same root cause: immediate dereferencing of nested Protobuf fields post-unmarshalling.
The following handlers process data from implants; if an implant binary is captured, each can be triggered to crash the server:
- createReverseTunnelHandler (server/handlers/sessions.go): panics when req.Rportfwd is omitted.
- socksDataHandler (server/handlers/sessions.go): fails when the SocksData sub-message is absent.
- serverKeyExchange and peersToString (server/handlers/pivot.go): dereference peerEnvelope.Peers without checking whether the peer list is empty or nil.

The Sliver RPC server (server/rpc/) is also susceptible. While these paths require an authenticated operator, they represent a significant stability risk: a malformed request from a custom client or automated script can bring down the entire C2 infrastructure.
| Function | File | Vulnerable Pattern |
| ------------------- | ------------------------------- | ------------------------------------------------------- |
| getTimeout | server/rpc/rpc.go | req.GetRequest().Timeout |
| getError | server/rpc/rpc.go | resp.GetResponse().Err |
| Portfwd | server/rpc/rpc-portfwd.go | req.Request.SessionID |
| GetSystem | server/rpc/rpc-priv.go | req.GetRequest().SessionID |
| GetPrivileges | server/rpc/rpc-priv.go | req.Request.SessionID |
| NetConnPivot | server/rpc/rpc-pivot.go | req.Request.SessionID |
| PivotListeners | server/rpc/rpc-pivot.go | req.Request.SessionID |
| SocksStart | server/rpc/rpc-socks.go | req.Request.SessionID |
| SocksStop | server/rpc/rpc-socks.go | req.Request.SessionID |
| RPortfwd | server/rpc/rpc-rportfwd.go | req.Request.SessionID |
| Shell | server/rpc/rpc-shell.go | req.Request.SessionID |
| ShellResize | server/rpc/rpc-shell.go | req.Request.SessionID |
| BackdoorImplant | server/rpc/rpc-backdoor.go | req.Request.SessionID, req.Request.Timeout |
| CrackstationTrigger | server/rpc/rpc-crackstations.go | statusUpdate.HostUUID (after unmarshal of req.Data) |
| Tasks | server/rpc/rpc-tasks.go | req.Request.SessionID |
| ImplantReconfig | server/rpc/rpc-reconfig.go | req.Request.SessionID |
| MsfInject | server/rpc/rpc-msf.go | req.Request.SessionID |
| Hijack | server/rpc/rpc-hijack.go | req.Request.SessionID |
The exploit requires valid implant credentials, which are inherently embedded in Sliver's generated binaries. Since these binaries are often deployed to untrusted or compromised environments, credential recovery is a high-probability event. During testing, it was confirmed that an attacker can obtain the required mTLS certificates and Age Secret Keys through:
- Running the strings utility on the implant binary, or dumping the embedded configuration block; either is sufficient to recover the private keys.

The provided exploit mtls_poc.go demonstrates how a single captured implant can be weaponized into a "Kill Switch" for the entire C2 infrastructure. The attack follows these steps:
1. Craft a BeaconRegister Protobuf message where the ID is defined, but the critical Register sub-message is explicitly omitted (set to nil).
2. Send the message over an implant transport; the server dereferences the nil Register pointer, leading to an immediate Full Server DoS.

The impact of this panic varies significantly depending on the C2 transport used by the implant. While the nil-pointer dereference happens in the shared handler logic, the transport layer determines whether this results in a localized request failure or a total server termination.
HTTP-based beacons do not crash the entire Sliver server. This is because Sliver utilizes the standard Go net/http library.
Code Reference (server/c2/http.go):
server.HTTPServer = &http.Server{
Addr: fmt.Sprintf("%s:%d", req.Host, req.Port),
Handler: server.router(),
// ...
}
// ...
go server.HTTPServer.ListenAndServe()
By design, net/http's ServeHTTP implementation wraps every connection in a defer recover() block. When the beaconRegisterHandler panics, the standard library catches it, logs the trace, and simply closes that specific TCP connection. The rest of the server remains unaffected.
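This containment is easy to verify with a self-contained demo (the routes and helper names below are invented for illustration): a handler that dereferences a nil pointer kills only its own connection, while the server keeps answering subsequent requests.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/http/httptest"
)

// runDemo stands up a throwaway HTTP server with one panicking route and
// one healthy route, then shows that a handler panic tears down only its
// own connection, not the process.
func runDemo() (panicErrored bool, status int, body string) {
	log.SetOutput(io.Discard) // hide net/http's recovered-panic trace

	mux := http.NewServeMux()
	mux.HandleFunc("/register", func(w http.ResponseWriter, r *http.Request) {
		var reg *struct{ Uuid string }
		_ = reg.Uuid // nil-pointer dereference: net/http recovers this panic
	})
	mux.HandleFunc("/healthy", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})

	srv := httptest.NewServer(mux)
	defer srv.Close()

	// The panicking handler's connection is closed mid-request...
	_, err := http.Get(srv.URL + "/register")
	panicErrored = err != nil

	// ...but the server survives and keeps answering.
	resp, err := http.Get(srv.URL + "/healthy")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	return panicErrored, resp.StatusCode, string(b)
}

func main() {
	errored, status, body := runDemo()
	fmt.Println("panicking endpoint errored:", errored)
	fmt.Println("healthy endpoint:", status, body)
}
```

This is exactly the behavior the multiplexed transports lack: the panic is recovered per connection, so the blast radius is a single request.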
Both mTLS and WireGuard utilize the yamux multiplexer to handle multiple streams over a single connection. Unlike the HTTP server, Sliver manually manages these goroutines without a global recovery mechanism.
mTLS server/c2/mtls.go:
if handler, ok := handlers[envelope.Type]; ok {
mtlsLog.Debugf("Received new mtls message type %d, data: %s", envelope.Type, envelope.Data)
go func(envelope *sliverpb.Envelope) {
respEnvelope := handler(implantConn, envelope.Data) // <--- PANIC HERE
if respEnvelope != nil {
implantConn.Send <- respEnvelope
}
}(envelope)
}
WireGuard server/c2/wireguard.go:
if handler, ok := handlers[envelope.Type]; ok {
go func(envelope *sliverpb.Envelope) {
respEnvelope := handler(implantConn, envelope.Data) // <--- PANIC HERE
// ...
}(envelope)
}
Because these handlers are invoked in a raw goroutine without a recover() block, the panic propagates to the top of that goroutine's stack. In Go, an unrecovered panic in any goroutine (here surfaced by the SIGSEGV from the nil dereference) terminates the entire runtime, killing the sliver-server process immediately.
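The goroutine-scoped nature of recover() is the crux: a deferred recover() only catches panics raised in its own goroutine. The sketch below (all names are mine) shows the guarded variant surviving a nil dereference; the unguarded variant is omitted because it would terminate the demo process itself, exactly as it does to sliver-server:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// crashingHandler mimics beaconRegisterHandler hitting a nil nested message.
func crashingHandler() {
	var register *struct{ Uuid string }
	_ = register.Uuid // runtime panic: nil pointer dereference
}

// spawnGuarded runs the handler the way the mTLS/WireGuard loops should:
// in its own goroutine, but behind a deferred recover().
func spawnGuarded(done chan<- string) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				_ = debug.Stack() // a real server would log this trace
				done <- fmt.Sprintf("recovered: %v", r)
				return
			}
			done <- "no panic"
		}()
		crashingHandler()
	}()
}

func main() {
	done := make(chan string)
	spawnGuarded(done)
	fmt.Println(<-done) // process survives; only this "stream" failed
	// Without the deferred recover() inside the goroutine, the same panic
	// would terminate the whole process: a recover() in main cannot catch
	// a panic raised in another goroutine.
}
```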
Similar to mTLS, the DNS transport reassembles messages and then forwards them to handlers in unsynchronized goroutines.
DNS server/c2/dns.go:
// Line 833: Forwarding the completed envelope
go dnsSession.ForwardCompletedEnvelope(msg.ID, pending)
// ...
// Inside ForwardCompletedEnvelope:
if handler, ok := handlers[envelope.Type]; ok {
respEnvelope := handler(s.ImplantConn, envelope.Data) // <--- PANIC HERE
// ...
}
This asynchronous call also lacks a recover() block, making DNS sessions equally capable of crashing the entire server.
| Protocol | Uses recover()? | Impact of Panic | Server Crash? |
| :--- | :---: | :--- | :---: |
| HTTP / HTTPS | Yes (Built-in) | Request Terminated | No |
| mTLS | No | Process Termination | Yes |
| WireGuard | No | Process Termination | Yes |
| DNS | No | Process Termination | Yes |
The impact of this vulnerability is Total Operational Paralysis. Because the panic causes the entire Go runtime to terminate:

- Every active session and beacon across the fleet is severed instantly, along with all mTLS, WireGuard, and DNS listeners.
- Operations remain down until an administrator manually restarts the sliver-server process.
Addressing these vulnerabilities requires a systemic shift towards "fail-safe" architecture. The root cause is a combination of unprotected Protobuf pointer dereferences and a lack of isolation in asynchronous transport layers.
The immediate priority is to implement strict validation for all nested Protobuf fields. In Go, omitted sub-messages are nil after unmarshaling; handlers must assume any pointer-typed field from an implant is potentially nil.
Handlers should validate the entire message structure before proceeding to business logic.
beaconReg := &sliverpb.BeaconRegister{}
if err := proto.Unmarshal(data, beaconReg); err != nil {
return nil // Drop malformed wire data
}
// MANDATORY VALIDATION BLOCK
if beaconReg.Register == nil {
beaconHandlerLog.Errorf("Nil Register message from %s", core.GetRemoteAddr(implantConn))
return nil
}
// Deep access is now safe
id := beaconReg.Register.Uuid
// ...
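One way to avoid repeating that validation block in every handler is a small generic guard. This is a sketch of my own, not Sliver code, using simplified stand-in types:

```go
package main

import (
	"errors"
	"fmt"
)

// requireNonNil returns an error instead of letting a nil nested message
// reach business logic. T stands for any pointer-typed sub-message.
func requireNonNil[T any](field *T, name string) (*T, error) {
	if field == nil {
		return nil, errors.New("missing required field: " + name)
	}
	return field, nil
}

// Simplified stand-ins for the generated Protobuf types.
type Register struct{ Uuid string }
type BeaconRegister struct{ Register *Register }

func main() {
	beaconReg := &BeaconRegister{} // Register omitted by a malicious sender
	if _, err := requireNonNil(beaconReg.Register, "BeaconRegister.Register"); err != nil {
		fmt.Println("rejected:", err) // rejected: missing required field: BeaconRegister.Register
		return
	}
}
```

The guard turns every would-be panic into an ordinary error that the handler can log and drop, which is the "fail-safe" posture the text calls for.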
To protect the gRPC/Operator interface, the server should deprecate direct access to the Request metadata field in favor of safe accessors that handle missing metadata gracefully.
// server/rpc/rpc.go
// getRequestSafe returns the Request metadata or an error, preventing panics
func getRequestSafe(req GenericRequest) (*commonpb.Request, error) {
r := req.GetRequest()
if r == nil {
return nil, status.Error(codes.InvalidArgument, "missing mandatory 'Request' metadata")
}
return r, nil
}
To achieve parity with the resilience of the HTTP transport, all multiplexed transports (mTLS, WireGuard, DNS) must implement a supervisor pattern using Go's recover() mechanism.
All handlers should be executed inside a "Safe Wrapper" that catches runtime panics, logs the failure, and terminates only the affected stream without crashing the entire C2 daemon.
func SafeInvoke(handler ServerHandler, conn *core.ImplantConnection, data []byte) {
defer func() {
if r := recover(); r != nil {
log.Errorf("RECOVERY: Intercepted panic in handler: %v\n%s", r, debug.Stack())
// The daemon continues running; only this specific action failed.
}
}()
response := handler(conn, data)
if response != nil {
conn.Send <- response
}
}
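Adapted to simplified stand-in types (the real core.ImplantConnection carries far more state), the wrapper can be exercised end to end: a panicking handler queues no response and leaves the process alive, while a healthy handler still delivers its envelope.

```go
package main

import (
	"fmt"
	"log"
	"runtime/debug"
)

// Simplified stand-ins for core.ImplantConnection and sliverpb.Envelope.
type Envelope struct {
	Type uint32
	Data []byte
}

type ImplantConnection struct {
	Send chan *Envelope
}

type ServerHandler func(conn *ImplantConnection, data []byte) *Envelope

// SafeInvoke is the supervisor wrapper from the text, adapted to the
// simplified types above.
func SafeInvoke(handler ServerHandler, conn *ImplantConnection, data []byte) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("RECOVERY: intercepted panic in handler: %v\n%s", r, debug.Stack())
		}
	}()
	if response := handler(conn, data); response != nil {
		conn.Send <- response
	}
}

func main() {
	conn := &ImplantConnection{Send: make(chan *Envelope, 2)}

	// A handler that dereferences a nil nested message, as in the CVE.
	crashing := ServerHandler(func(conn *ImplantConnection, data []byte) *Envelope {
		var nested *Envelope
		return &Envelope{Data: nested.Data} // panics; SafeInvoke contains it
	})
	healthy := ServerHandler(func(conn *ImplantConnection, data []byte) *Envelope {
		return &Envelope{Type: 53}
	})

	SafeInvoke(crashing, conn, nil) // panic is contained; process survives
	SafeInvoke(healthy, conn, nil)
	fmt.Println("responses queued:", len(conn.Send)) // 1
}
```

In the transport loops, the only change would then be launching `go SafeInvoke(handler, implantConn, envelope.Data)` instead of the bare closure.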
The framework should move away from manual nil-checking towards automated schema validation:

- protoc-gen-validate (PGV): annotate .proto files with (validate.rules).message.required = true and generate automatic validation code.

By adopting this multi-tiered approach, Sliver evolves from a "fail-deadly" design to a robust, enterprise-grade C2 architecture.