I took a look at the Forge token sent in the Authorization header to the Forge remote.
These tokens are gigantic, and much of their content is duplicated information in inefficient forms.
The header is around 2.3 KB of information sent on each request. If you inspect it, you see things encoded like this:
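A minimal decoding sketch, assuming the token is a standard three-segment JWT; the claim names in the commented output are invented placeholders for illustration, not the real Forge schema:

```typescript
// Hypothetical sketch: decode the payload segment of a JWT-style token taken
// from a Bearer Authorization header.
function decodeTokenPayload(authHeader: string): Record<string, unknown> {
  const token = authHeader.replace(/^Bearer\s+/i, "");
  const segments = token.split("."); // JWT layout: header.payload.signature
  if (segments.length !== 3) throw new Error("not a JWT");
  const json = Buffer.from(segments[1], "base64url").toString("utf8");
  return JSON.parse(json);
}

// Possible shape of the decoded payload (claim names invented):
// {
//   "aud": "ari:cloud:ecosystem::app/…",
//   "context": {
//     "cloudId": "1234-5678-…",
//     "workspaceAri":  "ari:cloud:jira::site/1234-5678-…",
//     "activationAri": "ari:cloud:jira::activation/1234-5678-…"
//   }
// }
```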
This information is:
- Encoded in an inefficient way
- Duplicated. It is basically the cloudId with some prefixes, and the cloudId is already in the token anyway.
For example, if I throw this context madness out of the header, it shrinks down to 1.5 KB.
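A minimal sketch of the kind of stripping I mean, assuming (with invented claim names again) that the extra context values are all just the cloudId wrapped in prefixes the receiver could rebuild:

```typescript
// Hypothetical sketch: drop context claims that are derivable from the
// cloudId, then compare payload sizes. Claim names are invented.
type TokenPayload = { context: Record<string, string>; [k: string]: unknown };

function stripDerivableContext(payload: TokenPayload): TokenPayload {
  // Keep only the cloudId; the receiver can re-derive the prefixed forms.
  return { ...payload, context: { cloudId: payload.context.cloudId } };
}

// Rough size comparison on a made-up payload:
const payload: TokenPayload = {
  aud: "ari:cloud:ecosystem::app/abc",
  context: {
    cloudId: "1234-5678",
    workspaceAri: "ari:cloud:jira::site/1234-5678",
    activationAri: "ari:cloud:jira::activation/1234-5678",
  },
};
const before = JSON.stringify(payload).length;
const after = JSON.stringify(stripDerivableContext(payload)).length;
console.log(`payload: ${before} B -> ${after} B`);
```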
I could certainly reduce it further by removing some 'always the same' prefixes, etc. However, that would need deeper insight into why they are there.
Anyway, I think it isn't a good sign to have such unnecessary inefficiency. This header is way bigger than many actual payloads. It adds costs on every hop:
- Every network hop needs to transfer more data, adding latency, especially if there is a TCP slow start somewhere in between (e.g. no HTTP/2 somewhere in the pipeline, or it hits some 'serverless' compute where a new connection needs to be established); see the back-of-the-envelope sketch after this list.
- Wherever the header needs to be processed, it's more data to verify, decode, interpret, and hold in memory.
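To put a number on the first bullet: a common initial congestion window is 10 segments (roughly 14.6 KB, per RFC 6928), so on a brand-new connection the header alone eats a noticeable slice of everything that can be sent before the first ACK round trip:

```typescript
// Back-of-the-envelope: share of a fresh TCP connection's initial congestion
// window consumed by the header alone. Assumes initcwnd = 10 segments of
// ~1460 B payload each (RFC 6928); real MSS values vary.
const headerBytes = 2300;
const initcwndBytes = 10 * 1460; // ~14.6 KB sendable before the first ACK
const share = headerBytes / initcwndBytes;
console.log(`header uses ${(share * 100).toFixed(0)}% of the first window`);
// -> header uses 16% of the first window
```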
Anyway: in general this might seem 'fine', but these inefficiencies do add up.
From a customer point of view: Atlassian products are not known for their snappiness =). Again, too many small inefficiencies start to add up until they are noticeable to customers.