Someone Explain The Point Of Proxy Servers In Modern Networks

Just doing some reading about the Cisco Web Security Appliance, and having worked on Bluecoats in the distant past, I honestly don't see the point of proxy servers nowadays. If you have a decent firewall with next-gen features (IPS, malware detection, URL filtering, dynamic feeds, etc.), then I don't see any real benefit to a proxy. Sure, you can save a few megabytes from caching, but people have big pipes nowadays, so nobody cares; and even if you did care, you'd use a WAN optimisation box like a Riverbed, which is far cheaper. I'm only frustrated because I keep reading about them, and I constantly ask myself: what's a proxy doing that an NGFW can't already do? Is there something I'm missing here? Is it just a bit of offloading for very large networks, maybe? I don't know.

> I'm only frustrated because I keep reading about them, and I constantly ask myself: what's a proxy doing that an NGFW can't already do?

Technically that NGFW is a proxy, even though it sits inline with the traffic. The proxy part happens when traffic is shunted off to the software that does the filtering and so on. At some point the hardware won't be able to handle the load of the NGFW responsibilities plus the proxying, so you split them apart onto dedicated boxes.

Why still use a web proxy? From my experience with Sophos Web Appliances:

  1. The reporting is far superior and more granular.
  2. It's very quick and simple to spin up another VA and add it to the cluster.
  3. Great integration with Active Directory.
  4. Users can request access to blocked sites straight from the block page in their own browser; we just get an email with the request.
  5. It links in with Sophos Endpoint Protection, so any browsing off-site has the same restrictions enforced, and all activity is reported back.
  6. It's very easy to create policies, and nested groups for increasing restrictions and security.

To be honest, even with NGFWs (experience with Check Point and Palo Alto), I've found a proxy's Layer 7 filtering and restriction to be far superior. Which isn't surprising: a proxy is a dedicated Layer 7 device, whereas on a firewall it's a function that has been bolted on.

You're not totally wrong, but proxies are more than just caches nowadays. We use our Bluecoats mostly for web filtering, so we can block porn, file-sharing services, and other sites that management doesn't want people visiting from work. They can also do SSL decryption, if a company wants to go that route, and scan downloaded files for known viruses.
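To make the filtering role concrete, here's a minimal sketch of an explicit forward proxy enforcing a blocklist. It's hedged heavily: the hostnames and port are made up, it handles plain-HTTP GETs only (no CONNECT/TLS, no auth, no caching), and a real appliance does vastly more:

```python
# Minimal filtering forward proxy sketch (plain-HTTP GET only).
# BLOCKED_HOSTS and the listening port are hypothetical examples.
import http.server
import socketserver
import urllib.request
from urllib.parse import urlsplit

BLOCKED_HOSTS = {"filesharing.example", "adult-site.example"}

class FilteringProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Explicit proxies receive the absolute URL in the request line.
        host = urlsplit(self.path).hostname or ""
        if host in BLOCKED_HOSTS:
            self.send_error(403, "Blocked by web policy")  # block-page hook
            return
        try:
            with urllib.request.urlopen(self.path, timeout=10) as upstream:
                body = upstream.read()  # a real proxy would scan this for malware
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.send_header("Content-Type",
                                 upstream.headers.get("Content-Type", "text/html"))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, f"Upstream fetch failed: {exc}")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("", 3128), FilteringProxy) as srv:
        srv.serve_forever()
```

Point a browser's HTTP proxy setting at port 3128 and blocked hosts get the 403 page, while everything else is fetched on the client's behalf.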

I’m actually looking to replace our BCs with a DNS filtering solution like Umbrella. The BCs are complex and we often run into issues where they block something, usually due to a certificate issue, and it’s difficult to figure out why.

The main reason we're using a dedicated proxy, rather than adding yet another task to our internet-facing firewalls, is capacity. Our proxy is currently a chassis with 22 load-balanced blades, so a lot of blades can go belly up before the user experience starts to deteriorate. Using a dedicated proxy also helps with organizational politics: the networking and security teams can keep more clearly established boundaries.

And as others have already said, it's about more than just caching, though I'd dispute that the caching benefit is trivial: you're saving a lot more than a few megabytes when there are tens or hundreds of thousands of users behind it. We're using ours for URL filtering, content analysis, and restricted egress for servers (as in, servers allowed to reach only a handful of predetermined URLs, with all other outgoing traffic blocked).

And lastly, just because an NGFW can do something doesn't mean it can do it as well as a dedicated platform. I haven't looked at our firewall vendor's proxy/web content filtering capability, but I have recommended against using their IPS capability because it's nowhere near feature parity with our current IPS.

Defense in depth, maybe?

I still see them a lot in high-security networks where the internal network has no default gateway and no external DNS resolution.

You put proxy 1 in DMZ1, reachable from the internal network, with a static route to proxy 2.

Proxy 2 goes in DMZ2, with a gateway to the internet, and connectivity only to proxy 1 on the internal side.

That way clients on the inside can access the (restricted) internet, but it becomes very hard for anything internal to connect directly to the outside, or for anything external to connect back in.
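To illustrate the client side, a hedged sketch with hypothetical hostnames: the internal machine needs no default gateway and no external DNS, because it hands every request to proxy 1, which chains onward to proxy 2:

```python
# Sketch: a client with no default gateway and no external DNS. It only needs
# internal DNS plus a route to proxy1 in DMZ1; external name resolution and
# onward routing (via proxy2) happen on the proxies. Hostnames are hypothetical.
import urllib.request

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({
        "http": "http://proxy1.dmz1.example:3128",
        "https": "http://proxy1.dmz1.example:3128",
    })
)
with opener.open("http://www.example.com/", timeout=10) as resp:
    print(resp.status, len(resp.read()))
```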

For your generic enterprise networks? Good endpoint protection and something like Umbrella usually does the trick 🙂

In an access setting, they provide centralised filtering, caching, and logging, and they can be highly resilient too. A lot of them also have firewall features built in, so they're often the same device.

One reason to separate them might simply be defining responsibility: if you have a network engineer and an internal systems support engineer, you might want the support person to be able to manage the web proxy without going poking around in the firewall.

In a provider setting, you might use a proxy or reverse proxy to provide SSL support where the underlying application doesn't have it, or where the software is old, untrustworthy, or just lacks the granular security you're after. You can also use them to distribute where resources are served from and, obviously, to cache the most commonly accessed resources.
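For the SSL-support case, here's a minimal sketch of a reverse proxy terminating TLS in front of a plain-HTTP legacy app. The backend address and certificate paths are hypothetical, and in reality you'd reach for nginx or HAProxy rather than rolling your own:

```python
# Sketch: TLS termination for a backend app that has no HTTPS support.
# BACKEND, the cert/key paths, and the listening port are hypothetical.
import http.server
import ssl
import urllib.request

BACKEND = "http://127.0.0.1:8080"  # legacy plain-HTTP application

class TlsTerminator(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the decrypted request to the backend over plain HTTP.
        with urllib.request.urlopen(BACKEND + self.path, timeout=10) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")  # hypothetical cert files

httpd = http.server.HTTPServer(("", 8443), TlsTerminator)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```

Clients speak HTTPS to port 8443 while the application itself never touches a certificate.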

While the bandwidth accessible to individuals and businesses is certainly increasing, that doesn’t mean it is free or will definitely be uncontended. If you’re working for a small or medium business then you might not need to bother with a proxy. If you’re an ISP or you’re running some sort of large event then you might find a caching proxy to be very useful.

One point I would make about proxies: if you use a cloud proxy service like Zscaler, you can apply web filtering (and more) to a user's machine whether they're sitting on the corporate network or on the McDonald's Wi-Fi.

NGFWs are great if you're sitting in the office or on a VPN, but in today's 'any device, anywhere' world, people want to move away from offices and even VPNs while still securing end-user devices.

DNS over HTTPS and newer TLS features that make MITM SSL inspection difficult or impossible will likely bring a resurgence in forward proxy deployments. Just my opinion, but what else will we have to work with from an inspection standpoint? I'm not looking forward to that scenario, but you can't let Linda in accounting surf the web with no protection.

Someone mentioned Umbrella, I think. Great product; we use it as one of four products in our security strategy (AV on the client, hosted MTAs for mail security, Umbrella DNS, and SSL deep inspection on the NGFW).

The only web proxies we have left where I work are there to cache data from Steam, so our QA department can fetch builds of the game they're working on far quicker, without also crippling our general internet access. We're talking fresh installs taking a matter of 10 minutes rather than several hours per member of staff, and TBs of data per month.

We notice quickly if a server has run out of space (they have an 8 TB cache in them) or isn't working, as Steam traffic can end up taking 400 GB or more in a 24-hour period if it isn't corrected.

The day Valve switched to HTTPS was the day I had to QoS Steam traffic way down and take a lot of complaints. Valve swiftly moved it back to HTTP.

Good luck doing HTTPS decryption on an NGFW.

Let’s not forget the best reason to use a proxy server… the ability to screw with people…

One of my favorites for April 1st: a squid proxy script that runs images through mogrify to rotate them 180 degrees, then hands them back to the client (sketched below).

While I haven't been so bold as to put my whole network through it, a quick ipset of some choice clients has kept me amused in the years when I've had time to set it up.
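For the curious, a hedged sketch of what such a squid url_rewrite_program helper might look like, in the spirit of the classic "upside-down-ternet". The web root, URL prefix, and the old one-line-per-request helper protocol are all assumptions; check your squid version's helper format first:

```python
#!/usr/bin/env python3
# Sketch of a squid url_rewrite_program helper: fetch each requested image,
# rotate it 180 degrees with ImageMagick's mogrify, and rewrite the URL to a
# locally served copy. WEBROOT/LOCAL are hypothetical, and this assumes the
# old line-based helper protocol (URL in, rewritten URL out).
import hashlib
import subprocess
import sys
import urllib.request

WEBROOT = "/var/www/flipped"         # directory served by a local web server
LOCAL = "http://127.0.0.1/flipped"   # URL prefix that maps to WEBROOT

for line in sys.stdin:
    if not line.strip():
        continue
    url = line.split()[0]            # squid sends "URL client/fqdn user ..."
    if url.lower().endswith((".jpg", ".jpeg", ".png", ".gif")):
        name = hashlib.md5(url.encode()).hexdigest() + url[url.rfind("."):]
        path = f"{WEBROOT}/{name}"
        try:
            urllib.request.urlretrieve(url, path)
            subprocess.run(["mogrify", "-rotate", "180", path], check=True)
            url = f"{LOCAL}/{name}"
        except Exception:
            pass                     # on any failure, pass the URL through
    sys.stdout.write(url + "\n")
    sys.stdout.flush()               # squid expects an immediate reply
```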

So, briefly: proxy servers have become essential in modern networks for enhancing security, privacy, and performance. They act as intermediaries between your device and the internet, which enables a range of benefits.
Lots of things can also be done with them in black-hat marketing 🏴‍☠️ (my favourite part), but as a fan of Proxy-Store's FAQ and services, I'd rather not ramble and will just recommend heading to their website, which is a great source of information.
But I'm still curious: how do you all see proxy servers fitting into today's network landscape? 🧐

Yes, offloading traffic. We found it useful to use Oxylabs proxies to distribute our requests more efficiently; otherwise we'd just have problems with IP bans.

We use WSAs. They provide much better per-user reporting than anything else. We also do SSL/TLS decryption so we can properly enforce policies and provide block pages. We are also scanning within web requests for malware. We’re not doing it for caching. It also allows us to control internet access through a funnel of sorts rather than permitting everything outbound for web resources. It’s another layer to supplement our security posture.

In my experience, lately more and more software developers seem to have regressed and no longer think about adding proxy settings to their applications, especially not proxy authentication.
Or, even better, they include those settings but choose to ignore them for two or three calls within the app, which forces you to open up your firewall and allow clients direct access to 80/443 anyway…
It makes it apparent that they're certainly not testing their development builds behind explicit proxies (anymore).

I've concluded that either explicit proxies are being phased out more and more (probably, as you mention, because most of the same benefit can be delivered by an NGFW these days), or it's part of the overall regression in software quality. But here's what I don't understand: when I open a browser on a Windows Terminal Server, how would an NGFW or transparent proxy figure out which user on the TS is trying to open a given website?

So with an NGFW or transparent proxy, policies can only match on client IP address, and every user of a given TS gets access to exactly the same websites?

How is this acceptable if you need accountability? “Who tried to open this URL?”
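That's exactly the gap explicit proxy authentication closes on multi-user hosts: each request carries the user's own credentials, independent of source IP. A hedged sketch (hypothetical proxy host and credentials) of what an explicit, authenticated proxy request looks like:

```python
# Sketch: an explicit proxy request from a shared Terminal Server. The
# Proxy-Authorization header identifies the user ("alice"), so policy and
# logging can be per-user even though every TS user shares one source IP.
# proxy.example:3128 and the credentials are hypothetical.
import base64
import http.client

creds = base64.b64encode(b"alice:secret").decode()
conn = http.client.HTTPConnection("proxy.example", 3128, timeout=10)
conn.request("GET", "http://www.example.com/", headers={
    "Proxy-Authorization": f"Basic {creds}",
    "Host": "www.example.com",
})
resp = conn.getresponse()
print(resp.status, resp.reason)
```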

Split tunnel + SaaS web proxy

Two main things:
Think of the case where you have a huge network with internet access centralized at HQ… the only way to restrict access for users at remote sites, without letting them modify default gateways and the like, is to push out via GPO the proxy settings to be used for all HTTP and HTTPS traffic (and more); the sketch after this post shows what that amounts to on each client.

Secondly, scalability. It's true that nowadays firewalls can do almost everything, but you can't meet large-network demands with one firewall running IPS, proxying, SSL decryption, RA-VPN, web publishing, and so on all at once. With dedicated proxies you can offload some of the SSL traffic, and you also get the small benefit that AsyncOS is a different codebase from FTD, even though it may consume the same security feeds.
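On the first point, here's a hedged sketch of the per-user registry values such a GPO effectively pushes; the proxy hostname is hypothetical, and in practice Group Policy manages this rather than a script:

```python
# Sketch: the per-user proxy settings a GPO typically pushes so that all
# HTTP/HTTPS traffic goes via the HQ proxy. proxy.hq.example is hypothetical.
# Windows-only (winreg is in the standard library on Windows).
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ProxyEnable", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "ProxyServer", 0, winreg.REG_SZ,
                      "proxy.hq.example:3128")
```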

We use a proxy to limit access to outside sites with a whitelist DB, whose permissions can apply to a user, a group, or a particular machine.
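A minimal sketch of how that kind of lookup might resolve, deny-by-default; the entries and the user/group/machine precedence are hypothetical:

```python
# Sketch: whitelist DB where permissions attach to a user, a group, or a
# machine; anything not explicitly allowed is denied. Entries are hypothetical.
WHITELIST = {
    ("user", "alice"): {"updates.vendor.example"},
    ("group", "developers"): {"pypi.org", "github.com"},
    ("machine", "buildserver01"): {"registry.npmjs.org"},
}

def is_allowed(host: str, user: str, groups: list[str], machine: str) -> bool:
    keys = [("user", user)] + [("group", g) for g in groups] + [("machine", machine)]
    return any(host in WHITELIST.get(k, set()) for k in keys)

print(is_allowed("pypi.org", "bob", ["developers"], "desktop42"))  # True (group)
print(is_allowed("pypi.org", "bob", ["finance"], "desktop42"))     # False (denied)
```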