Status Update
Comments
ge...@google.com <ge...@google.com> #2
Hi Josh, friendly ping for an update on progress
fi...@elucidate.co <fi...@elucidate.co> #3
I think the TL;DR is that this is stuck behind investigating the current lab outages, which are more urgent.
pending CL is
fi...@elucidate.co <fi...@elucidate.co> #4
fi...@elucidate.co <fi...@elucidate.co> #5
ja...@google.com <ja...@google.com> #6
Hmm, gerrit-watcher didn't post the CLs on this bug, so I guess I'll do so manually:
Throughput numbers from a device (MB/s):
          none   brotli   lz4
USB 3.0   120    110      190
USB 2.0   38     75       63
I'm seeing identical throughput with brotli quality 0 and 1 (110 MB/s end to end), although there's still some giant low-hanging fruit in compressing multiple files at once.
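For reference, a rough sketch of how an end-to-end number like that can be measured, assuming a reasonably recent platform-tools build; the payload path is made up, and a synthetic all-zero file will compress very differently from real app data:
-------
% # create a hypothetical 1 GiB test payload (all zeros, so it compresses trivially)
% dd if=/dev/zero of=/tmp/payload.bin bs=1M count=1024
% # time the transfer end to end; recent adb builds also accept -z <algorithm> / -Z
% # on push to force or disable compression (assumption: your adb is new enough)
% time adb push /tmp/payload.bin /data/local/tmp/payload.bin
-------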
zstd isn't in the tree yet (I started the ball rolling on that, but no idea when it'll end up getting merged), but I'll take a look at it eventually, probably when I get around to implementing compression for generic streams in adb.
This is probably good enough to call this fixed for now.
Description
We are wondering: we have a LoadBalancer as ingress for our Kubernetes services, and this load balancer has a lot of open ports that don't lead anywhere and are not configured in Kubernetes. There is also no indication of these open ports in the firewall rules:
-------
% nmap -sV
Starting Nmap 7.60 (
Stats: 0:00:47 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 94.74% done; ETC: 09:53 (0:00:02 remaining)
Nmap scan report for
Host is up (0.033s latency).
rDNS record for
Not shown: 980 filtered ports
PORT     STATE  SERVICE    VERSION
25/tcp   closed smtp
43/tcp   open   tcpwrapped
80/tcp   open   http       nginx 1.15.11
110/tcp  open   tcpwrapped
143/tcp  open   tcpwrapped
443/tcp  open   ssl/http   nginx 1.15.11
465/tcp  open   tcpwrapped
587/tcp  open   tcpwrapped
700/tcp  open   tcpwrapped
993/tcp  open   tcpwrapped
995/tcp  open   tcpwrapped
3389/tcp open   tcpwrapped
5222/tcp open   tcpwrapped
5432/tcp open   tcpwrapped
5900/tcp open   tcpwrapped
5901/tcp open   tcpwrapped
8080/tcp open   http-proxy
8085/tcp open   tcpwrapped
8099/tcp open   tcpwrapped
9200/tcp open   tcpwrapped
-------
Trying to establish a connection looks like this:
-------
% telnet
Trying 35.241.60.142...
Connected to
Escape character is '^]'.
Connection closed by foreign host.
-------
So the connection is established and then immediately closed, which matches the "tcpwrapped" service nmap reports above: the TCP handshake completes, but the connection is dropped before any service data is exchanged.
According to the configuration in Kubernetes, only ports 443 and 80 should be open:
-------
% kubectl -n staging get ingress
NAME HOSTS ADDRESS PORTS AGE
efi-staging-ingress
-------
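To cross-check what the cluster itself thinks it is exposing, it can also help to look at the Service objects behind the ingress (namespace and ingress name taken from above; output will obviously differ per cluster):
-------
% kubectl -n staging get svc -o wide
% kubectl -n staging describe ingress efi-staging-ingress
-------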
I found some indications that this could be some strange default behavior (
-------
% gcloud compute firewall-rules list
NAME                         NETWORK          DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
datalab-network-allow-ssh    datalab-network  INGRESS    1000      tcp:22                              False
default-allow-icmp           default          INGRESS    65534     icmp                                False
default-allow-internal       default          INGRESS    65534     tcp:0-65535,udp:0-65535,icmp        False
default-allow-rdp            default          INGRESS    65534     tcp:3389                            False
default-allow-ssh            default          INGRESS    65534     tcp:22                              False
gke-efi-2838372d-all         default          INGRESS    1000      esp,ah,sctp,tcp,udp,icmp            False
gke-efi-2838372d-ssh         default          INGRESS    1000      tcp:22                              False
gke-efi-2838372d-vms         default          INGRESS    1000      tcp:1-65535,udp:1-65535,icmp        False
k8s-fw-l7--3ff9710af2cf4c3f  default          INGRESS    1000      tcp:30000-32767                     False
-------
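The GKE-managed rule for the L7 load balancer can also be inspected directly to see its exact source ranges and target tags (rule name taken from the listing above):
-------
% gcloud compute firewall-rules describe k8s-fw-l7--3ff9710af2cf4c3f
-------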
If I list all ports that are allowed for source
-------
% gcloud compute firewall-rules list --format=json | jq '.[] | select(.sourceRanges[] | contains("
[
"22"
]
null
[
"3389"
]
[
"22"
]
-------
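For reference, a complete version of that filter could look roughly like this, assuming the source range that was cut off above is 0.0.0.0/0:
-------
% # assumes the redacted source range was 0.0.0.0/0
% gcloud compute firewall-rules list --format=json \
    | jq '.[] | select(.sourceRanges[]? | contains("0.0.0.0/0")) | .allowed[].ports'
-------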
So there is no explanation for why all the other ports are open as well (9200, for example). I do understand that this is not really dangerous, just ugly, since the connections are simply bounced at the load balancer, but from a security perspective it still looks very bad. Where and how can we restrict the LoadBalancer to open only the ports that we actually need, which would be 443 and 80?
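One thing that might help to narrow this down is to check which forwarding rules actually point at that address, since the extra ports could be a property of the forwarding rule / target proxy rather than of the firewall (the IP is the one from the telnet output above):
-------
% gcloud compute forwarding-rules list | grep 35.241.60.142
-------
If the forwarding rule for that address only lists ports 80 and 443, then whatever answers on the other ports is presumably in front of our backends rather than configured by us.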