
acquire with reverse proxy causes Server error: Peer does not have a valid session #1706

@brherger

Description


Hello, thanks for a great tool!

I am trying to host the labgrid-coordinator behind an nginx reverse proxy, using the labgrid/coordinator:v25.0.1 and nginx:alpine images with Docker Compose.

For simple requests from labgrid-client everything works fine, but when I try to acquire a place the following error occurs:

```
Server error: Peer ipv4:172.22.0.6:42766 does not have a valid session
```

If I bypass the reverse proxy and send the request straight to the coordinator container it works as expected.

After much debugging and research, I believe nginx is opening a new upstream TCP connection for the AcquirePlace stream after the startup/sync calls complete, so the coordinator sees a different peer address than the one it registered the session under.
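For context, my proxy setup is essentially the stock nginx gRPC proxying pattern. A minimal sketch (the upstream name, hostname, and port here are placeholders, not my exact config):

```nginx
# Minimal gRPC reverse-proxy sketch; names and ports are placeholders.
upstream coordinator {
    server board_farm_coordinator:20408;
    keepalive 16;            # reuse upstream connections where possible
}

server {
    listen 20408 http2;      # gRPC requires HTTP/2

    location / {
        grpc_pass grpc://coordinator;
    }
}
```

As I understand it, even with `keepalive`, nginx demultiplexes the client's HTTP/2 streams and may carry them over different upstream connections, which would explain the changing source port below.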

I added some extra logging to confirm:

```python
# coordinator.py @ v25.0.1 tag

async def AcquirePlace(self, request, context):
    logging.info("AcquirePlace")
    peer = context.peer()
    name = request.placename
    logging.info(f"{peer=}")
    logging.info(f"{self.clients=}")
    try:
        username = self.clients[peer].name
    except KeyError:
        await context.abort(grpc.StatusCode.FAILED_PRECONDITION, f"Peer {peer} does not have a valid session")
```

Note the subtle port change (34474 → 34476), which results in the invalid session:

```
board_farm_coordinator  | INFO:root:client connected: ipv4:172.22.0.6:34474
board_farm_coordinator  | INFO:root:still client connected: ipv4:172.22.0.6:34474
board_farm_coordinator  | DEBUG:root:client in_msg startup {
board_farm_coordinator  |   version: "25.0.1"
board_farm_coordinator  |   name: "host/vagrant"
board_farm_coordinator  | }
...
<subscription and sync traffic>
...
board_farm_coordinator  | INFO:root:AcquirePlace
board_farm_coordinator  | INFO:root:peer='ipv4:172.22.0.6:34476'
board_farm_coordinator  | INFO:root:self.clients={'ipv4:172.22.0.6:34474': ClientSession(coordinator=<labgrid.remote.coordinator.Coordinator object at 0x76e815771a90>, peer='ipv4:172.22.0.6:34474', name='host/vagrant', queue=<Queue at 0x76e8157aa010 maxsize=0 _getters[1]>, version='25.0.1')}
```

I am curious whether anyone has gotten this working and has the magic nginx config that keeps the upstream connection persistent.

I also wonder whether matching the peer's address and port is an ideal way to identify a user later on, given that the messages are deliberately defined as unary calls across different streams. Perhaps passing the user name right into AcquirePlaceRequest would be more robust?
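A sketch of what that could look like on the coordinator side. To be clear, this is not the current labgrid API: the `username` request field and the `find_session()` helper are hypothetical, just to illustrate a lookup that survives a proxy reconnect.

```python
# Hypothetical name-based session lookup; the `username` field and
# find_session() helper do not exist in labgrid, they only illustrate the idea.
from dataclasses import dataclass

@dataclass
class ClientSession:
    peer: str   # "ipv4:addr:port" as reported by context.peer()
    name: str   # client-supplied name from the startup message

def find_session(clients, username):
    """Look a session up by client name, ignoring the (proxy-dependent) peer."""
    for session in clients.values():
        if session.name == username:
            return session
    return None

# The startup stream registered the session under port 34474 ...
clients = {"ipv4:172.22.0.6:34474": ClientSession("ipv4:172.22.0.6:34474", "host/vagrant")}
# ... so even though AcquirePlace arrives from port 34476, the name still matches.
print(find_session(clients, "host/vagrant") is not None)  # True
```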
