
    Alejandro Piñeiro: v3dv status update 2022-05-16


We haven’t posted updates on the work done on the V3DV driver since we announced that the driver had become Vulkan 1.1 Conformant.

But after reaching that milestone, we’ve been very busy working on more improvements, so let’s summarize the work done since then.

Multisync support

As mentioned in past posts, for the Vulkan driver we tried to focus as much as possible on the userspace side. So we tried to re-use the existing kernel interface we had for V3D, already used by the OpenGL driver, without modifying or extending it.

This worked fine in general, except for synchronization. The V3D kernel interface only supported one synchronization object per submission. This didn’t map properly to Vulkan synchronization, which is more detailed and complex and allows defining several semaphores/fences. We initially handled the situation with workarounds, and left some optional features unsupported.

After our 1.1 conformance work, our colleague Melissa Wen started working on adding support for multiple semaphores on the V3D kernel side. Then she also implemented the changes in V3DV to use this new feature. If you want more technical info, she wrote a very detailed explanation on her blog (part 1 and part 2).

For now the driver has two codepaths that are used depending on whether the kernel supports this new feature or not. That also means that, depending on the kernel, the V3DV driver could expose a slightly different set of supported features.

More common code – Migration to the common synchronization framework

For a while, Mesa developers have been making a great effort to refactor and move common functionality to a single place, so it can be used by all drivers, reducing the amount of code each driver needs to maintain.

During these months we have been porting V3DV to some of that infrastructure, from small bits (the common VkShaderModule-to-NIR code) to a really big one: the common synchronization framework.

As mentioned, the Vulkan synchronization model is really detailed and powerful. But that also means it is complex. V3DV support for Vulkan synchronization included heavy use of threads. For example, V3DV needed to rely on a CPU wait (polling with threads) to implement vkCmdWaitEvents, as the GPU lacked a mechanism for this.
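To make the “CPU wait (polling with threads)” idea concrete, here is a tiny, purely illustrative Python sketch of the pattern (my own example, not driver code): a caller blocks by polling a flag that stands in for event state the GPU cannot wait on.

```python
import threading
import time

class PolledEvent:
    """Toy stand-in for an event that the hardware cannot wait on directly."""

    def __init__(self):
        self._signaled = False

    def signal(self):
        self._signaled = True

    def wait(self, poll_interval=0.001):
        # CPU-side wait: keep polling the state until it becomes signaled,
        # burning CPU time instead of letting the GPU wait on its own.
        while not self._signaled:
            time.sleep(poll_interval)

event = PolledEvent()
threading.Timer(0.1, event.signal).start()  # something signals the event later
event.wait()                                # blocks for ~100 ms by polling
print("event signaled")
```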

This was common to several drivers, so at some point there were multiple versions of complex synchronization code, one per driver. But, some months ago, Jason Ekstrand refactored Anvil support and collaborated with other driver developers to create a common framework. Obviously each driver has its own needs, but the framework provides enough hooks for that.

After some GitLab and IRC chats, Jason provided a merge request with the port of V3DV to this new common framework, which we iterated on and tested through the review process.

Also, with this port we got timeline semaphore support for free. Thanks to this change, we have ~1.2k fewer lines of code in total (and more features!).

Again, we want to thank Jason Ekstrand for all his help.

Support for more extensions:

Since the 1.1 conformance announcement, the following extensions have been implemented and exposed:

  • VK_EXT_debug_utils
  • VK_KHR_timeline_semaphore
  • VK_KHR_create_renderpass2
  • VK_EXT_4444_formats
  • VK_KHR_driver_properties
  • VK_KHR_16bit_storage and VK_KHR_8bit_storage
  • VK_KHR_imageless_framebuffer
  • VK_KHR_depth_stencil_resolve
  • VK_EXT_image_drm_format_modifier
  • VK_EXT_line_rasterization
  • VK_EXT_inline_uniform_block
  • VK_EXT_separate_stencil_usage
  • VK_KHR_separate_depth_stencil_layouts
  • VK_KHR_pipeline_executable_properties
  • VK_KHR_shader_float_controls
  • VK_KHR_spirv_1_4

If you want more details about VK_KHR_pipeline_executable_properties, Iago recently wrote a blog post about it (here).

Android support

Android support for V3DV was added thanks to the work of Roman Stratiienko, who implemented it and submitted the Mesa patches. We also want to thank the Android RPi team, and the Lineage RPi maintainer (Konsta), who also created and tested an initial version of that support, which was used as the baseline for the code that Roman submitted. I haven’t tested it myself (it’s on my personal TO-DO list), but LineageOS images for the RPi4 are already available.

Performance

In addition to new functionality, we have also been working on improving performance. Most of the focus has been on the V3D shader compiler, as improvements there are shared between the OpenGL and Vulkan drivers.

But one of the features specific to the Vulkan driver (still pending a port to OpenGL) is that we have implemented double-buffer mode, which is only available when MSAA is not enabled. This mode splits the tile buffer size in half, so the driver can start processing the next tile while the current one is being stored to memory.

In theory this could improve performance by reducing tile-store overhead, so it should be more beneficial when vertex/geometry shaders aren’t too expensive. However, it comes at the cost of reducing the tile size, which also causes some overhead of its own.

Testing shows that this helps in some cases (e.g. the Vulkan Quake ports) but hurts in others (e.g. Unreal Engine 4), so for the time being we don’t enable it by default. It can be enabled selectively by adding V3D_DEBUG=db to the environment variables, as shown below. The idea for the future would be to implement a heuristic that decides when to activate this mode.
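For example (a minimal sketch of my own, not project tooling; “vkquake” is just a placeholder binary name), you could enable it for a single run like this:

```python
import os
import subprocess

# Enable V3DV's double-buffer mode only for this one process.
env = dict(os.environ, V3D_DEBUG="db")

# "vkquake" stands in for whatever Vulkan application you want to test.
subprocess.run(["vkquake"], env=env, check=False)
```

From a shell, the equivalent is simply prefixing the command, e.g. V3D_DEBUG=db followed by the application.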

FOSDEM 2022

If you are interested in watching an overview of the improvements and changes to the driver during the last year, we gave a presentation at FOSDEM 2022: “v3dv: Status Update for Open Source Vulkan Driver for Raspberry Pi 4”.

    blogs.igalia.com/apinheiro/2022/05/v3dv-status-update-2022-05-16/


    Matthew Garrett: Can we fix bearer tokens?


Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.
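As a rough sketch of that binding (my own illustration, not anyone's production code, assuming a JWT-style token whose claims are already available as a dict; RFC 8705, mentioned below, standardizes this idea as the x5t#S256 confirmation claim):

```python
import base64
import hashlib

def cert_thumbprint(cert_der: bytes) -> str:
    """Base64url-encoded SHA-256 of the DER-encoded client certificate."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def bind_token(claims: dict, client_cert_der: bytes) -> dict:
    # Token issuer: embed a hash of the mTLS client certificate in the token.
    claims["cnf"] = {"x5t#S256": cert_thumbprint(client_cert_der)}
    return claims

def token_matches_cert(claims: dict, presented_cert_der: bytes) -> bool:
    # Relying service: reject the token unless the certificate used on this
    # mTLS connection hashes to the value baked into the token.
    expected = claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint(presented_cert_der)
```

Signing and transport are omitted; the only point is that the token and the connection's client certificate get checked against each other.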

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.
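To put rough numbers on that: a page that opens, say, six mTLS connections would spend about 6 × 0.5 = 3 seconds on RSA signing in the TPM, versus roughly 6 × 0.1 = 0.6 seconds with EC keys; better, but still very noticeable.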

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security-critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure, this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like the one Github suffered. What do we have to do to get people to actually start working on implementing that?


    Sophie Herold: Pika Backup 0.4 Released with Schedule Support


Pika Backup is an app focused on backups of personal data. It’s internally based on BorgBackup and provides fast incremental backups.

Pika Backup version 0.4 has been released today. This release wraps up a year of development work. After the huge jump to supporting scheduled backups and moving to GTK 4 and Libadwaita, I am planning to go back to slightly more frequent and smaller releases. At the same time, well-tested and reliable releases will remain a priority of the project.

Release Overview

The release contains 41 resolved issues, 27 changelog entries, and a codebase that, despite many cleanups, nearly doubled in size. Here is a short rundown of the changes.

      • Ability to schedule regular backups.
      • Support for deleting old archives.
      • Revamped graphical interface including a new app icon.
      • Better compression and performance for backups.
      • Several smaller issues rectified.

You can find a more complete list of changes in Pika’s Changelog.

A screenshot of Pika Backup

Thanks!

Pika Backup’s backbone is the BorgBackup software. If you want to support them, you can check out their funding page. The money will be well spent.

A huge thank you to everyone who helped with this release. Especially Alexander, Fina, Zander, but also everyone else, the borg maintainers, the translators, the beta testers, people who reported issues or contributed ideas, and last but not least, all the people that gave me so much positive and encouraging feedback!

Resources

    blogs.gnome.org/sophieh/2022/05/15/pika-backup-0-4-released-with-schedule-support/


    Jonathan Blandford: Crosswords 0.3


I’m pleased to announce Crosswords 0.3. This is the first version that feels ready for public consumption. Unlike the version I announced five months ago, it is much more robust and has some key new features.

New in this version:

  • Available on flathub: After working on it out of the source tree for a long while, I finally got the flatpaks working. Download it and try it out—let me know what you think! I’d really appreciate any sort of feedback, positive or negative.

  • Puzzle Downloaders: This is a big addition. We now support external downloaders and puzzle sets. This means it’s possible to have a tool that downloads puzzles from the internet. It also lets us ship sets of puzzles for the user to solve. With a good set of puzzles, we could even host them on gnome.org. These can be distributed externally, or they can be distributed via flatpak extensions (if not installing locally). I wrapped xword-dl and puzzlepull with a downloader to add some newspaper downloaders, and I’m looking to add more original puzzles shortly.

  • Dark mode: I thought that this was the neatest feature in GNOME 42. We now support both light and dark mode and honor the system setting. CSS is also heavily used to style the app, allowing for future visual modifications and customizations. I’m interested in allowing CSS customization on a per-puzzle-set basis.

  • Hint Button: This is a late addition. It can be used to suggest a random word that fits in the current row. It’s not super convenient, but I also don’t want to make the game too easy! We use Peter Broda’s wordlist as the basis for this.
  • .puz support: Internally we use the unencumbered .ipuz format. This is a pretty decent format and supports a wide variety of crossword features. But there are a lot of crosswords out there that use the .puz format, and I know people have large collections of puzzles in that format. I wrote a simple converter to load these.

Next steps

I hope to release this a bit more frequently now that we have gotten to this stage. Next on my immediate list:

  • Revisit clue formats; empirically, crossword files in the wild play a little fast-and-loose with escaping and formatting (e.g. random entities and underline-escaping).
  • Write a Puzzle Set manager that will let you decide which downloaders/puzzles to show, as well as launch gnome-software to search for more.
  • Document the external Puzzle Set format to allow others to package up games.
  • Fix any bugs that are found.

I also plan to work on the Crossword editor and get that ready for a release on flathub. The amazing-looking AdwEntryRow should make parts of the design a lot cleaner.

But, no game is good without being fun! I am looking to expand the list of puzzles available. Let me know if you write crosswords and want to include them in a puzzle set.

Thanks

I couldn’t have gotten this release out without lots of help. In particular:

  • Federico, for helping to refactor the main code and for lots of great advice
  • Nick, for explaining how to get apps into flathub
  • Alexander, for his help with getting toasts working with a new behavior
  • Parker, for patiently explaining to me how the world of Crosswords worked
  • The folks on #gtk for answering questions and producing an excellent toolkit
  • And most importantly Rosanna, for testing everything and for her consistent cheering and support

Download on FLATHUB

    blogs.gnome.org/jrb/2022/05/14/crosswords-0-3/


    Ole Aamot: Voice


Voice is new public voice communication software being built on GNOME 42.

Voice will let you listen to and share short, personal and enjoyable Voicegrams, recorded by GNOME executives, employees and volunteers, via electronic mail and on the World Wide Web. Xiph.org’s Ogg Vorbis is a patent-free audio codec that more and more Free Software programs, including GNOME Voice (https://www.gnomevoice.org/), have implemented. You can listen to Voicegram recordings in good/fair recording quality by opening the Voicegram file $HOME/Music/GNOME.ogg in the G_USER_DIRECTORY_MUSIC folder with Evolution or Nautilus.

Currently it records sound waves from the live microphone into $HOME/Music/GNOME.ogg (or $HOME/Musikk/GNOME.ogg on Norwegian bokmål systems) and plays back an audio stream from api.perceptron.stream:8000/56.ogg simultaneously on GNOME 42.
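The locale-dependent path comes from the XDG user directories that G_USER_DIRECTORY_MUSIC refers to; as a small illustration (my own sketch, not Voice’s code), this is how an application can resolve that folder via GLib from Python:

```python
from gi.repository import GLib

# Resolves to e.g. /home/user/Music, or /home/user/Musikk on a Norwegian
# bokmål system; may be None if the XDG user directories are not configured.
music_dir = GLib.get_user_special_dir(GLib.UserDirectory.DIRECTORY_MUSIC)
print(music_dir)
```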

The second release, Voice 0.0.2, with live microphone recording into $HOME/Music/GNOME.ogg and a concert experience with Sondre Lerche (Honolulu, Hawaii) and presenter Neil McGovern (Executive Director, GNOME Foundation), is available from https://download.gnome.org/sources/gnome-voice/0.0/gnome-voice-0.0.2.tar.xz

More information about Voice is available on https://wiki.gnome.org/Apps/Voice and http://www.gnomevoice.org/


    Christian Hergert: Builder GTK 4 Porting, Part IV


This week was a little slower as I was struggling with an adjustment to my new medication. Things progress nonetheless.

Text Editor

I spent a little time this week triaging some incoming Text Editor issues and feature requests. I’d really like this application to get into maintenance mode soon because I have plenty of other projects to maintain.

  • Added support for “gnome-text-editor -” to open a file from standard input, even when communicating with a single-instance application from the terminal.
  • Branch GNOME 42 so we can add new strings.
  • Fix a no-data-loss crash during shutdown.

Template-GLib

  • Fix template evaluation on macOS.
  • Make boolean expression precedence more predictable.
  • Cleanup output of templates with regards to newlines.

libpanel

  • Propagate modified page state to tabs
  • Some action tweaks to make things more keyboard shortcut friendly.

Builder

  • Merged support for configuration editing from Georges.
  • Add lots of keybindings using our new keybinding engine.
  • Tracked down and triaged an issue where shortcut controllers do not capture/bubble to popovers. Added workarounds for common popovers in Builder.
  • Teach Builder to load keybindings from plugins again and auto-manage them.
  • Lots of tweaks to the debugger UI and where widgetry is placed.
  • Added syntax highlighting for debugger disassembly.
  • Added menus and toggles for various logging and debugger hooks. You can get a breakpoint on g_warning() or g_critical() by checking a box.
  • Finally, the ability to select a build target as the default build target.
  • More menuing fixes all over the place, particularly with treeviews and sourceviews.
  • Fix keyboard navigation and activation for the new symbol-tree
  • Port the find-other-file plugin to the new workspace design which no longer requires using global search.
  • GTK 4 doesn’t appear to scroll to cells in a textview as reliably as I’d like, so I dropped the animation code in Builder and we jump straight to the target before showing popovers.
  • Various work on per-language settings overrides by project.
  • Drop the Rust rls plugin as we can pretty much just rely on rust-analyzer now.
  • Lots of CSS tweaks to make things fit a bit better with upcoming GNOME styling.
  • Fix broken dialog which prevented SDK updates from occurring with other dependencies.

A screenshot of builder's find-other-file plugin

A screenshot of Builder's debugger

A screenshot showing the build target selection dialog

A screenshot of the run menu

A screenshot of the logging menu
    blogs.gnome.org/chergert/2022/05/14/builder-gtk-4-porting-part-iv/


    Sam Thursfield: Trying out systemd’s Portable Services


I recently acquired a monome grid, a set of futuristic flashing buttons which can be used for controlling software, making music, and/or playing the popular 90s game Lights Out.

There’s no sound from the device itself; all it outputs is a USB serial connection. Software instruments connect to the grid to receive button presses and control the lights via the widely supported Open Sound Control protocol. I am using monome-rs to convert the grid signals into MIDI and send them to Bitwig Studio to make interesting noises, which I am excited to share with you in the future, but first we need to talk about software packaging.

Monome provide a system service named serialosc, which connects to the grid hardware (over USB-serial) and provides the Open Sound Control endpoint. This program is not packaged by Linux distributions, and that is fine; it’s rather niche hardware and distro maintainers shouldn’t have to support every last weird device. On the other hand, it’s rather crude to build it from source myself, install it into /usr/local, add a system service, etc. etc. Is there a better way?

First I tried bundling serialosc along with my control app using Flatpak, which is semi-possible – you can build and run the service, but it can’t see the device unless you set the super-insecure “--device=all” mode, and it still can’t detect the device because udev is not available, so you would have to hardcode the device name /dev/ttyACM0 and hotplug no longer works… basically this is not a good option.

Then I read about systemd’s Portable Services . This is a way to ship system services in containers, which sounds like the correct way to treat something like serialosc. So I followed the portable walkthrough and within a couple of hours the job was done: here’s a PR that could add Portable Services packaging upstream: https://github.com/monome/serialosc/pull/62

I really like the concept here, it has some pretty clear advantages as a way to distribute system services:

  • Upstreams can deliver Linux binaries of services in a (relatively) distro-independent way
  • It works on immutable-/usr systems like Fedora Silverblue and Endless OS
  • It encourages containerized builds, which can lower the barrier for developing local improvements.

This is quite a new technology and I have mixed feelings about the current implementation. Firstly, when I went to screenshot the service for this blog post, I discovered it had broken:

screenshot of terminal showing "Failed at step EXEC - permission denied"

Fixing the error was a matter of – disabling SELinux. On my Fedora 35 machine the SELinux profile seems totally unready for Portable Services, as evidenced in this bug I reported, and this similar bug someone else reported a year ago which was closed without being fixed. I accept that you get a great OS like Fedora for free in return for being a beta tester of SELinux, but this suggests portable services are not yet ready for widespread use.

That’s better:

I used the recommended mkosi build to create the serialosc container, which worked as documented and was pretty quick. All mkosi operations have to run as root. This is unfortunate and also interacts badly with the Git safe.directory feature (the source trees are owned by me, not root, so Git’s default config raises an error).

It’s a little surprising that the standard OCI container format isn’t supported, only disk images or filesystem trees. Initially I built a 27MB squashfs, but this didn’t work as the container rootfs has to be read/write (another surprise), so I deployed a tree of files in the end. The service container is Fedora-based and comes out at 76MB across 5,200 files – significant bloat around a service implemented in a few thousand lines of C code. If mkosi supported Alpine Linux as a base OS we could likely reduce this overhead significantly.

The build/test workflow could be optimised but is already workable; the following steps take about 30 seconds with a warm mkosi.cache directory:

  • sudo mkosi -t directory -o /opt/serialosc --force
  • sudo portablectl reattach --enable --now /opt/serialosc --profile trusted

We are using the “trusted” profile, by the way, because the service needs access to udev and /dev.

All in all, the core pieces are already in place for a very promising new technology that should make it easier for third parties to provide Linux system-level software in a safe and convenient way; well done to the systemd team for a well-executed concept. All it lacks is some polish around the tooling and integration.


    Marcus Lundblad: Maps Spring Cleaning


Thought it was time to share some news on Maps again.


After the 42.0 release I have been putting in some time to do some spring cleaning in Maps to slim down the code a little bit.

This would also mean less stuff to care about later on when porting to GTK4.

First we have the “no network” view.

We used to use the network monitor functionality from GIO to determine if the machine has a usable connection able to reach the public internet, to avoid showing incomplete map tile data (where some parts might be cached from earlier, while others are missing).
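For reference, the kind of check this relied on looks roughly like the following in GIO (a minimal Python sketch of my own; Maps itself is written in JavaScript/GJS):

```python
from gi.repository import Gio

monitor = Gio.NetworkMonitor.get_default()

# True if GIO (typically backed by NetworkManager) believes a usable
# connection exists; this is the signal that misbehaved with some VPN setups.
print("network available:", monitor.get_network_available())

# React to changes instead of polling (requires a running main loop).
monitor.connect("network-changed",
                lambda mon, available: print("changed:", available))
```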

Unfortunately there have been some issues with setups not playing well with NetworkManager (such as some third-party VPN software), so we have had several bug reports around this over the years.

At one point we even added a CLI option to bypass the network checking as a sort of “workaround”. But now we decided to remove this (along with some widgetry) and just rely on the user having a usable connection. The search and route functionality should still behave well and show error messages if they are unable to read from their connections.

Moreover, we dropped the dependency on GOA (gnome-online-accounts) and the remaining support for user check-in using Foursquare, as this has been pretty flaky (possibly due to quota issues, or something like that). Facebook support was already removed a few releases ago (and prior to that, logging in to Facebook using GOA hadn’t been working for many years).

The next thing is the process for signing in with an OpenStreetMap account for editing points of interest.

Previously we had an implementation (which, by the way, was my first large contribution to Maps back in 2015) based on a literal translation from Java to JS of the “signpost” library used by JOSM, which basically “programmatically runs” the HTML forms for requesting an OAuth access token when signing in with the supplied user name and password. It then presents the verification code using a WebKit web view.

This has a few problems: it involves handling passwords inside the stand-alone application, which goes against best practices for OAuth. Furthermore, it implies a dependency on WebKitGTK (the GTK WebKit wrapper), which is yet another dependency that needs porting to GTK4.

So now, with the new implementation, we “divert” to the user’s default browser, presenting them with a login page (or, if they’re already logged in to OSM in the browser session via cookies, they will be prompted directly to grant access to the “GNOME Maps” application without giving credentials again). This implementation is pretty similar to how Cawbird handles signing in to Twitter.
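Schematically, the new flow boils down to handing the OAuth authorization URL to the default browser instead of embedding a web view. A minimal sketch of the idea (my own illustration, not the actual Maps code; the endpoint, scope and client ID shown are assumptions for demonstration):

```python
import webbrowser
from urllib.parse import urlencode

# Placeholder values for illustration only.
params = urlencode({
    "client_id": "EXAMPLE_CLIENT_ID",
    "redirect_uri": "http://127.0.0.1:8000/callback",
    "response_type": "code",
    "scope": "write_api",
})

# Hand the sign-in/consent page to the user's default browser; an existing
# OSM session cookie means the user may only see the "grant access" prompt.
webbrowser.open("https://www.openstreetmap.org/oauth2/authorize?" + params)
```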


There are also some new visual updates.

The search results now have icons for additional place types, such as fast food joints, theaters, dog parks, and drink shops. I did some scouting around in the GNOME icon library 😄





Also, as a continuation of one of the last features added for 42.0, where Maps remembers whether the scale was shown or hidden the last time you ran it: I realized the feature for hiding the scale is a bit hidden (you need to look for it in the keyboard shortcuts menu), and it is also not possible to access it on a touch-only device (such as the PinePhone). So I added a checkbox for it under the layers menu (where I think it fits in).


And that's about it for this time.

Next time, I think there will be some more under-the-hood changes.

    ml4711.blogspot.com/2022/05/maps-spring-cleaning.html