Just discovered this super useful site - for hotels and other hotspots that require a WiFi signup page. Most sites auto-upgrade to TLS, which breaks the captive portal redirect.
Cool domain name too - hope they'll hold it.
Technical stuff
For containers/VMs running on the same physical machine - including containers in the same Pod or in different Pods scheduled using affinity - it would be highly useful to use modern inter-process communication based on shared memory, DMA or virtio, instead of copying bytes from buffer to kernel buffer to yet another buffer (3 copies is the best case - usually far more).
We have the tools - Istio CNI (and others) can inject abstract unix sockets, and there are CSI providers that can inject real unix sockets.
Unix sockets - just like Android Binder - can pass file descriptors and shared memory blocks to a trusted per-node component - which can further pass them to the destination after applying security policies.
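A quick way to see the abstract unix socket part in action - a sketch using socat as a stand-in for the injected socket (the fd-passing part needs SCM_RIGHTS support in the daemon itself):

```
# Containers in the same Pod share a network namespace, and abstract unix
# sockets live in the network namespace - so this works with no shared volume.
# Terminal / container 1: listen on an abstract socket named '@demo'.
socat ABSTRACT-LISTEN:demo,fork EXEC:cat
# Terminal / container 2 (same Pod): connect and echo through it.
echo hello | socat - ABSTRACT-CONNECT:demo
# Abstract sockets show up with a '@' prefix.
ss -x | grep '@'
```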
I had been looking into this for some time - I worked for many years on Android, so I started in the wrong direction by attempting to use Binder (which is now included in many kernels). But I realized Wayland is already there, and it's not a bad generic protocol if you ignore the display parts and the XML.
Both X11 and Wayland use shared buffers on the local machine - but X11 is a monster with an antiquated protocol focused on rendering on the client - and browsers are doing this far better. Wayland was designed for local display and security - but underneath there is a very clean IPC protocol based on buffer passing.
What would this look like in Istio or other cloud meshes? Ztunnel (or another per-node daemon) would act as a CSI or CNI plugin, injecting a unix socket into each Pod. It could use the Wayland binary protocol - but not implement any of the display protocols, just act as a proxy. If it receives a TCP connection it can simply pass the file descriptor after reading the header, but it would mainly act as a proxy for messages containing file/buffer descriptors. Like Android, it could also pass open UDS file descriptors from one container to another, after checking permissions - allowing direct communication.
The nice thing is that even when using VMs instead of containers, there is now kernel support for virtwl, plus sommelier - and this would also work for adding stronger policies on a desktop or when communicating with a GPU.
Modern computers have a lot of cores and memory - running K8S clusters with fewer but larger nodes and taking advantage of affinity allows co-location of the entire stack, avoiding the slower network and slower TCP traffic for most communications - while keeping 'least privilege' and isolation. Of course, a monolith can still be slightly faster - but shared memory is far closer to it in speed than TCP.
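The co-location part is plain pod affinity - a minimal sketch (names and image are made up):

```
# Schedule 'frontend' on the same node as pods labeled app=backend,
# so their traffic can stay on the local machine.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: backend
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx
EOF
```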
I've been looking at this for a few years in my spare time - most of the code and experiments are obsolete now, but I think using Wayland as a base (with a clean, display-independent proxy) is the right pragmatic solution. And simpler is better - I still like Binder and the Android model - I wish clouds would add it to their kernels...
Wasted a few good hours on this: if you want to move from gnome (and variants like cinnamon) to something else, like sway, and not have to re-enter all the passwords - ignore the man page and all the search results that suggest `--password-store=gnome`.
It is `--password-store=gnome-libsecret` instead.
The rest - installing/starting gnome-keyring - is still valid; verify with seahorse (i.e. the GNOME password manager) that it is working.
And add a desktop entry with the right flag. `--enable-logging=stderr --v` helps with debugging - look for key_storage_linux.cc in the output.
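For reference, the working invocation looks roughly like this - a sketch, since the binary name and desktop file path depend on the distro:

```
# Run once from a terminal to confirm the right backend is picked
# (flags as mentioned above; the relevant log lines come from key_storage_linux.cc).
chromium --password-store=gnome-libsecret --enable-logging=stderr --v=1 2>&1 | grep -i key_storage

# Then bake the flag into the Exec= line of a local desktop entry, e.g.
# ~/.local/share/applications/chromium.desktop:
#   Exec=/usr/bin/chromium --password-store=gnome-libsecret %U
```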
Found: Mount Block Devices in ChromeOS
Apparently it is possible to change the LXC config and get access to the real VM, which appears to be read-only. Combined with moving devices to the VM, this gives more control - but you are still limited by the small number of kernel modules in the VM.
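A rough sketch of the LXD side of this - assuming the default 'termina' VM and 'penguin' container, and that the block device is already visible inside the VM (device names are placeholders):

```
# From crosh (ctrl-alt-t), get a shell in the termina VM.
vsh termina
# Inside termina, expose a block device to the default container via LXD.
lxc config device add penguin extdisk disk source=/dev/sdb path=/mnt/extdisk
# Or edit the container config directly.
lxc config edit penguin
```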
I love the security model - the 'host' just handles the display and a number of jailed services, with all the apps in the VM and LXC on top. The problem is that it's too restrictive - and the linux apps are still all in the same sandbox, with access to each other. Flatpak at least tries to isolate each app - but falls into the same trap that Java and early Android did: the apps ask for too many permissions.
I'm sticking with my less efficient setup - docker and pods with explicitly mounted volumes, syncthing and remote desktop, with one container per app or dev project - but I've been looking to move from ChromeOS to normal linux set up in a similar way.
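Roughly, that means one long-lived container per project with only its directory mounted - paths and image below are just an example:

```
# Only the project's directory is visible inside; nothing else from $HOME leaks in.
docker run -d --name proj1 -v "$HOME/work/proj1":/work -w /work debian:bookworm sleep infinity
docker exec -it proj1 bash
```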
Found Debootstick - I previously used vmdb for the same purpose.
Mainstream linux install is stuck in 1992, with a CD-compatible image running an 'installer' that asks 10 questions and installs old games and office applications in case you may need them. It also assumes you are lucky enough to have just one computer - which you may want to dual-boot with Windows - and will spend quality time manually configuring it and taking care of it.
Raspberry Pi, ChromeOS, Android and OpenWRT use an 'image' install, where an image is just copied to disk. The boot 'glue' code can be simple (EFI, kernel, firmware images) or fancy - encrypted disk, A/B kernel, verified R/O rootfs. But after boot you still have about the same rootfs you would run in docker or in a VM.
That's what debootstick and vmdb can automate - a USB stick running a customized image with my SSH authorized keys and a minimal set of apps, to boot and 'dd' or flash onto the few servers/laptops/routers I use.
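A sketch of the flow - the debootstick flag is from its docs, so double-check against your version:

```
# Build a minimal Debian rootfs, customize it, then turn it into a bootable image.
sudo debootstrap --variant=minbase bookworm /tmp/rootfs http://deb.debian.org/debian
sudo mkdir -p /tmp/rootfs/root/.ssh
sudo cp ~/.ssh/id_ed25519.pub /tmp/rootfs/root/.ssh/authorized_keys
sudo debootstick --config-root-password-none /tmp/rootfs /tmp/img.dd
# Write the image to the USB stick (or directly to the target disk).
sudo dd if=/tmp/img.dd of=/dev/sdX bs=4M status=progress
```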
I use Debian and OpenWRT - the real value of distributions is still patching/building/testing the core libraries and kernels.
OCI images are ubiquitous - there is plenty of automation tooling and infra, they can be tested and customized - and they run more securely both on home machines in docker and in K8S.
Another recent find is KasmVNC - the viewer/client is any browser, with a more optimized wire protocol (but I don't think it's real WebRTC - and it seems to be X11 only, no wayland yet). Most importantly, they maintain nightly builds of common applications as docker images - install on any server or in K8S and use from any laptop. The tricky part remains getting ACME certificates - if only it had an Istio gateway in front... Very curious how it'll impact performance - but I'm still looking for an equivalent using real WebRTC/Wayland with the same docker-image set.
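Trying one of their images is a one-liner - a sketch based on the Kasm docs (the image tag, the 6901 port and the VNC_PW/kasm_user convention are theirs, adjust as needed):

```
# Run a browser-accessible Firefox session; open https://localhost:6901
# and log in as 'kasm_user' with the password set below.
docker run --rm -it --shm-size=512m -p 6901:6901 -e VNC_PW=changeme kasmweb/firefox:1.14.0
```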
Good doc - useful, for example, if you migrate an 'istioctl install' to helm, in particular for Service resources, which can't be deleted without losing the external IP.
https://jacky-jiang.medium.com/import-existing-resources-in-helm-3-e27db11fd467
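The short version: since Helm 3.2 an existing object can be adopted by a release if it carries the right label and annotations. The release, chart and Service names below are just an example for an ingress gateway:

```
# Mark the existing Service as managed by the Helm release, then install/upgrade.
kubectl -n istio-system label service istio-ingressgateway app.kubernetes.io/managed-by=Helm
kubectl -n istio-system annotate service istio-ingressgateway \
  meta.helm.sh/release-name=istio-ingress \
  meta.helm.sh/release-namespace=istio-system
helm upgrade --install istio-ingress istio/gateway -n istio-system
```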
USB charging info - going all the way back to LPT/COM ports, and lots of details on the protocols.
XDS is a gRPC-based protocol for pushing configs and updates. Data is identified by 'resource type' and 'resource name', and the values are typically protocol buffers, but can be JSON or any other format.
K8S defines a similar API - based on JSON, with the types also defined as protocol buffers and an OpenAPI schema.
XDS and K8S APIs are quite similar - and serve a similar purpose, to allow controllers and other apps to get real-time notifications when anything changes in the config database. K8S also supports 'update/delete' - which are not present in 'official' XDS, but relatively easy to extend or support as a separate gRPC method.
It is possible to write a one- or two-way bridge between the two protocols: it would allow a simpler model for watching K8S resources compared to 'list and watch', and likely provide better performance. In the other direction, it would allow kubectl to be used to debug and interact with XDS servers.
In general, there are many similar protocols using GET/LIST plus some form of 'watch' or events - creating bridges that let users pick one client library and interact with different protocols seems better than the current model of one heavy client library per protocol.
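The K8S half of such a bridge is already visible from the CLI - each change arrives as an ADDED/MODIFIED/DELETED watch event, which maps fairly directly onto XDS resource updates and removals:

```
# 'List and watch' from kubectl, printing the watch event type for each change
# (requires a reasonably recent kubectl).
kubectl get services -A --watch --output-watch-events
```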
A few notes on K8S Events.
K8S at its core is a database of configs - with a stable and well-defined schema. Different applications (controllers) use the database to perform actions - run workloads, set up networking and storage. The interface to the database is nosql-style, with a 'watch' interface similar to pubsub/mqtt that allows controllers to operate with very low latency, on every change.
Most features are defined in terms of CRDs - the database objects, with metadata (name, namespace, labels, version), data and status. The status is used by controllers to write info about how the object was actuated, and by users to find that out. For example, a Pod represents a workload - the controllers will write the Pod's IP and 'Running' in its status. Other controllers will use this information to update other objects - like EndpointSlice.
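For example, the actuation info is just fields under .status (pod name is hypothetical):

```
# Read back what the controllers wrote into the Pod status.
kubectl get pod mypod -o jsonpath='{.status.phase} {.status.podIP}{"\n"}'
```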
K8S also has a less used and more generic pubsub mechanism - the Event, for 'general purpose' events.
Events, logs and traces are similar in structure and use - but differ in persistence and in how the user interacts with them. While 'debugging' is the most obvious use case, analyzing and using them in code - to extract information and trigger actions - is where the real power lies.
The CRD 'status' is persistent and treated as a write to the object - all watchers will be notified, so writing it is quite expensive. Logs are batched and generally written to specialized storage, then deleted after some time - far cheaper, but harder to use programmatically, since each log system has a different query API.
In K8S, events have a 1h default retention - far less than logs, which are typically stored for weeks, or Status, which is stored as long as the object lives. The K8S implementation may also optimize the storage - keeping events in RAM longer or using optimized storage mechanisms. In GKE (and likely other providers) they are also logged to Stackdriver - and may have longer persistence.
Events are associated with other objects using the 'involvedObject' field, which links the event to an object and is used in 'kubectl describe'. This pattern is similar to the new Gateway 'policy attachment' - where config, overrides or defaults can be attached to other resources.
```
# Selectors filter on server side.
kubectl get events -A --field-selector involvedObject.kind!=Pod
kubectl get events -A --watch
```
Watching the events can be extremely instructive and reveals a lot of internal problems - Status also includes errors, but you need to know which particular object to watch. Links: