Traversing a proxy can degrade performance
That is only one of the issues. Prior to that comment, the author also points out that our apps had the following characteristics: (1) “While HTTPS-based, could not be easily configured to present a machine certificate to a proxy”, (2) “Required IP-layer connectivity to a variety of backends using non-HTTPS protocols”, (3) “Explicitly required IP-based client allowlists to function.” Changing the location of the proxy does not change any of those characteristics.
That’s inherent to any hosted solution
Not if you build P2P connections or route across a smart routing fabric. P2P connections (e.g., WireGuard) have the same latency as accessing the internet directly. A smart routing fabric can even reduce end-to-end latency (e.g., how OpenZiti works, or something like Cloudflare Argo Smart Routing). It’s not about breaking the laws of physics; it’s about utilising the advantages of the internet while avoiding a single point of failure/backhaul.
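To make that concrete, here is a rough Go sketch that times the same request over a direct path versus hairpinned through a hosted proxy. Every URL here is a placeholder of my own invention; a WireGuard-style P2P tunnel behaves like the direct case, since no extra geographic hop is inserted.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// timeGet measures wall-clock time for one request; crude, but enough
// to show the cost of an inserted hop.
func timeGet(client *http.Client, target string) time.Duration {
	start := time.Now()
	resp, err := client.Get(target)
	if err == nil {
		resp.Body.Close()
	}
	return time.Since(start)
}

func main() {
	// Placeholder internal app endpoint.
	target := "https://app.internal.example.com/healthz"

	// Direct path: roughly what a P2P tunnel (e.g., WireGuard) gives you.
	direct := &http.Client{Timeout: 5 * time.Second}

	// Hairpin path: the same request forced through a hosted proxy.
	proxyURL, err := url.Parse("https://proxy.vendor.example.com:443") // placeholder
	if err != nil {
		panic(err)
	}
	viaProxy := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}

	fmt.Println("direct:   ", timeGet(direct, target))
	fmt.Println("via proxy:", timeGet(viaProxy, target))
}
```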
The solution? Self-hosting your proxy or access control solution
The proxy needs to be hosted somewhere. Maybe self-hosting performs better than a hosted SaaS because I can self-host it close to my applications. But if you are deploying at enterprise scale, you likely have users distributed across the country/globe, and probably distributed apps too. So now you need distributed proxies, and bam, you’re building your own Cloudflare/Zscaler. Can you compete with their PoPs and Tier 1 peering agreements? Maybe, maybe not. The point is that self-hosting is not a silver bullet. Besides, you can self-host a VPN or zero trust overlay too, so your answer also applies to those other technologies.
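For a sense of scale: the proxy software itself is the easy part. Here is a minimal self-hosted reverse proxy using nothing but Go’s standard library (the backend URL is a placeholder). The hard part described above is operating many of these close to every user population, not writing one.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder internal app; in practice this would be config-driven.
	backend, err := url.Parse("http://10.0.0.5:8080")
	if err != nil {
		log.Fatal(err)
	}

	// One proxy next to one app: trivial. Every extra hop a request takes
	// through a far-away instance of this process adds latency, which is
	// why proximity (and therefore distribution) matters.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":443", proxy))
}
```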
Does this solution require my data to traverse their servers? Because if yes, they’re going to degrade your performance whether they’re a VPN or a proxy.
Building on the above points, this is only true if the provider has few servers. If you can deploy many and use smart routing, you may actually be able to increase performance. It’s basically the principle of MPLS, but applied across the public internet.
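A hedged sketch of the smart-routing idea follows: probe the direct path and a few candidate relays concurrently, then route via whichever is fastest right now. All hostnames are placeholders; real fabrics such as OpenZiti or Argo also weigh loss and mid-path telemetry, not just connect-time RTT.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// rtt approximates path latency with a TCP connect time.
func rtt(addr string) (time.Duration, error) {
	start := time.Now()
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return 0, err
	}
	conn.Close()
	return time.Since(start), nil
}

func main() {
	paths := []string{
		"direct.example.com:443",    // placeholder: the direct internet path
		"relay-lhr.example.com:443", // placeholder: London PoP
		"relay-iad.example.com:443", // placeholder: Virginia PoP
	}

	type result struct {
		addr string
		rtt  time.Duration
		err  error
	}
	results := make(chan result, len(paths))

	// Probe all candidate paths concurrently.
	for _, p := range paths {
		go func(addr string) {
			d, err := rtt(addr)
			results <- result{addr, d, err}
		}(p)
	}

	// Pick the lowest-latency reachable path.
	best := result{rtt: time.Hour}
	for range paths {
		if r := <-results; r.err == nil && r.rtt < best.rtt {
			best = r
		}
	}
	if best.addr == "" {
		fmt.Println("no path reachable")
		return
	}
	fmt.Printf("routing via %s (%v)\n", best.addr, best.rtt)
}
```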
As an added benefit, self-hosting removes the MitM attack angle. Zero trust architecture naturally asks: why are you allowing your data to be passed through infrastructure you do not own?
Any product/technology/vendor that needs, or can sell you, information about your data passing through its stack is not zero trust, IMHO. The better solutions use E2EE so that packets stay encrypted from source to destination regardless of which hops they traverse. If you want it to be more secure, use your own keys for that E2EE so that it’s literally impossible for the vendor hosting the overlay to decrypt or see your packets as they cross their infra. Don’t like that? Go further and host the data plane yourself.
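Here is a minimal sketch of that bring-your-own-keys E2EE property, using NaCl box (Curve25519 + XSalsa20-Poly1305) from golang.org/x/crypto. Key distribution is out of scope; the point is that a relay in the middle only ever handles ciphertext it cannot open.

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Each endpoint generates its own keypair; the overlay vendor never
	// holds a private key, so it cannot decrypt traffic crossing its infra.
	// (Errors elided for brevity in this sketch.)
	srcPub, srcPriv, _ := box.GenerateKey(rand.Reader)
	dstPub, dstPriv, _ := box.GenerateKey(rand.Reader)

	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err)
	}

	// Sealed at the source; the nonce is prepended for transport.
	ciphertext := box.Seal(nonce[:], []byte("packet payload"), &nonce, dstPub, srcPriv)

	// Every intermediate hop sees only this opaque blob.
	// Opened only at the destination, which holds the other private key.
	plaintext, ok := box.Open(nil, ciphertext[24:], &nonce, srcPub, dstPriv)
	fmt.Println(ok, string(plaintext))
}
```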
Therefore, you want a self-hosted reverse proxy to achieve all of the above.
I believe we debunked bullets 2 (best performance) and 3 (MitM). If you want traffic inspection, then you can combine a proxy with an overlay network. But do you need inspection? A ZTN overlay network allows you to close all inbound ports, completely stopping external network attacks. If you embed ZTN in your apps, then you have no listening ports on the underlay network at all: your application is literally unattackable via conventional IP-based tooling, and all conventional network threats are immediately useless. Both of those cut the attack surface by orders of magnitude. If your use case still requires inspection, then enable that.
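To illustrate the no-listening-ports pattern without pinning it to any one vendor’s SDK, here is a hedged Go sketch: the app dials out to an overlay rendezvous point (a placeholder address) and serves HTTP over that single outbound connection, so nothing on the host ever binds an inbound port for a scanner to find. A real ZTN SDK such as OpenZiti additionally authenticates the identity and multiplexes many sessions over the overlay.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

// oneConnListener adapts one outbound connection into a net.Listener so
// http.Serve can run over it. After handing out the single conn, Accept
// blocks: there is no inbound path to accept anything else from.
type oneConnListener struct {
	conn  net.Conn
	block chan struct{} // never closed
}

func (l *oneConnListener) Accept() (net.Conn, error) {
	if l.conn != nil {
		c := l.conn
		l.conn = nil
		return c, nil
	}
	<-l.block
	return nil, net.ErrClosed
}

func (l *oneConnListener) Close() error   { return nil }
func (l *oneConnListener) Addr() net.Addr { return overlayAddr{} }

type overlayAddr struct{}

func (overlayAddr) Network() string { return "overlay" }
func (overlayAddr) String() string  { return "overlay" }

func main() {
	// Outbound dial only: this process never binds or listens on the
	// underlay, so a port scan of the host finds no attack surface.
	conn, err := net.Dial("tcp", "overlay.example.com:443") // placeholder rendezvous
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "served with zero listening ports")
	})
	log.Fatal(http.Serve(&oneConnListener{conn: conn, block: make(chan struct{})}, nil))
}
```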