Hi! A little late to the game here, but we believe _everything_ should go through the VPN without it becoming a bottleneck in any way. This is why we founded our company. The inspiration was pushing everything through the solution at a 1,000-person company, so happy to share some war stories if it’s useful.
That would be great. How do I tell it that anything going to our k8s clusters should go through the VPN? I read about URL listing, but I don’t think that will work since things like a bastion don’t have a URL.
The “modern model of working” is half truth and half marketing fiction propagated by cloud service providers and their ilk. A VPN will always be more secure, but security is always balanced against usability.
To clarify… I mean when the client is using the VPN. They can always turn it off; they just won’t be able to connect to critical production resources.
From what I have been seeing, a lot of security standards are looking for tight controls on resources like k8s clusters and AWS accounts… and VPNs can help ensure the public is quickly blocked from sensitive internal resources.
In our case, IP whitelisting is a large part of our protections… but some people have IPs that change multiple times per day (LTE at home), so they have to get their IP updated constantly.
Here are some resources on alternatives. Basically you control their web access with a secure web gateway or something similar, so all their traffic is still monitored without having to route it through your network. For accessing internal resources, there are things like ZTNA. I would reach out to a couple of vendors like Zscaler, Palo Alto, etc. and get a feel for what’s being done for companies with similar requirements.
All great questions. The VPN would be in the cloud (our current one is not, and doesn’t work most of the time anyway), so I think bandwidth will be “less” of an issue, but I am still worried about it. The other requirements are still a work in progress. The security guy is working through various levels of things. Right now, there is no requirement to even use the VPN at all, but we do use IP whitelisting managed via a git repo that requires review to merge changes. However, some users have IPs that change more than daily, so the minimum is to enable those users to access resources without constantly having to merge IP changes. After that we know we need to improve our internal security measures… kinda like the ZTNA that people talk about. We need to move in that direction, but it will be a long road due to limited resources.
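To give a flavour of what that looks like, the allowlist repo boils down to something like this (not our actual config, just a minimal Terraform-style sketch with made-up names and documentation-range IPs):

```hcl
# Minimal sketch: a reviewed list of CIDRs feeding a bastion security group rule.
# Names and addresses are illustrative; changes only land via reviewed merges.
variable "allowed_cidrs" {
  description = "CIDRs allowed to reach the bastion"
  type        = list(string)
  default = [
    "203.0.113.10/32",  # office egress IP (example address)
    "198.51.100.25/32"  # remote user on LTE, i.e. the entry that churns daily
  ]
}

resource "aws_security_group_rule" "bastion_ssh_allowlist" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.allowed_cidrs
  security_group_id = aws_security_group.bastion.id # assumed to exist elsewhere
}
```

So every LTE IP change means another reviewed merge against that file, which is the part that is killing us.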
Usually you would configure the client to know what IP ranges it should send down that tunnel. Presumably, if your k8s clusters are in-house, you would just tell the client to tunnel for your whole internal IP range.
HOW you tell the client which ranges to tunnel will depend on the client and how you’re managing it. Often static routes are something you push from the concentrator in the office when the client connects.
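For example, with an OpenVPN-style concentrator you’d push just the internal ranges and leave the default route alone, so only that traffic rides the tunnel. A minimal sketch with placeholder ranges (WireGuard does the equivalent with AllowedIPs on the client side):

```
# OpenVPN server (concentrator) snippet; ranges are placeholders.
# Only these routes go down the tunnel. Without `push "redirect-gateway def1"`
# the client's default route, i.e. general internet traffic, is left untouched.
push "route 10.0.0.0 255.0.0.0"       # internal network, incl. bastion hosts
push "route 172.20.0.0 255.255.0.0"   # e.g. the k8s node / API subnet
```

That covers things like a bastion with no URL: it’s just an IP inside a routed range.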
There is nothing “more secure” about replacing an MFA-protected web proxy with a VPN appliance; if you look at the last few years, those appliances have a long history of RCE vulnerabilities. Compromised Fortinets in particular have repeatedly been associated with ransomware.
In the past several years, SSLVPN appliances/concentrators have become lightning rods for repeated attacks, thanks to trivial vulnerabilities that organizations don’t patch and misconfiguration (usually a lack of network segmentation) that gives the bad guys unfettered access to internal resources. Fortinet, Pulse, F5, Cisco, and many others have been affected by these sorts of exploits.
In other words, VPNs have given way too many orgs a false sense of security, because they act as if the VPN is secure on its own.
The other thing is that even if the VPN isn’t vulnerable, if orgs do a bad job of test-restoring their backups, or the office internet/hardware is down, then they’re going to have an availability problem they wouldn’t have with cloud services.
In 2023 I’d be much more inclined to trust the availability of Microsoft 365 services than a typical company file server.
Seriously, there are better ways to protect this sort of thing than “IP whitelisting”. This gets back to what I mean about modern workplaces. Using AWS - literally “the cloud” - and saying it can only be accessed via your office IP is a poor workflow.
Deploy Yubikeys on your accounts and forget about IP addressing.
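And if you want AWS itself to enforce the MFA part, there’s the well-known deny-unless-MFA IAM policy pattern from the AWS docs. A rough sketch (the policy name and the NotAction list are illustrative; tune them so users can still enrol their own MFA device):

```hcl
# Sketch of the documented "deny unless MFA is present" IAM policy pattern.
# The NotAction list is illustrative; leave enough self-service actions open
# that a new user can still set up their MFA device.
resource "aws_iam_policy" "require_mfa" {
  name = "require-mfa" # hypothetical name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyMostActionsWithoutMFA"
      Effect    = "Deny"
      NotAction = [
        "iam:ChangePassword",
        "iam:GetUser",
        "sts:GetSessionToken"
      ]
      Resource  = "*"
      Condition = {
        BoolIfExists = { "aws:MultiFactorAuthPresent" = "false" }
      }
    }]
  })
}
```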
Your reply and u/disclosure5 kinda opened my eyes to zero trust. We’re in the midst of finally doing subnet segmentation, and I’ve been struggling a bit with the structure. For whatever reason, your posts gave me some insight.
For the secure web gateway… that would have to be something on the laptops, I assume? Otherwise it would be like a VPN.
But the ZTNA stuff sounds like everything you expose has to be individually hardened. We are pretty small; we don’t have the resources for that. And we have things like Prometheus dashboards that at best will do basic auth, but with the community Helm chart we used for it, we can’t even turn that on. I looked at Zscaler, and that sounds a lot like a VPN in that all traffic has to go through them. It certainly can do a lot more than a VPN, and that does sound great, but it sounds like it would require all the same work as split tunneling would to separate out the traffic at the client end… I am a bit out of my depth on this stuff, so maybe I am missing something.
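(For what it’s worth, the one workaround I’ve seen suggested for dashboards like Prometheus is to leave the chart alone and put basic auth on an ingress in front of it, something like the sketch below, assuming ingress-nginx and with made-up hostnames and names. I haven’t tried it yet.)

```yaml
# Hypothetical Ingress fronting the Prometheus UI with basic auth via
# ingress-nginx annotations, so the chart itself never has to support auth.
# The hostname, namespace, secret, and service names are placeholders;
# "prometheus-basic-auth" would be a Secret built from an htpasswd file
# (kubectl create secret generic prometheus-basic-auth --from-file=auth).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: prometheus-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication required"
spec:
  ingressClassName: nginx
  rules:
    - host: prometheus.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-server   # adjust to whatever the chart created
                port:
                  number: 80
```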