      Flathub Blog: Introducing App Brand Colors

      news.movim.eu / PlanetGnome · Yesterday - 00:00 · 2 minutes

    We're gearing up to launch curated banners on the Flathub home page! However, before we can do that there's one more blocker: banners need a background color for each app, and many apps don't provide this metadata yet. This is why today we're expanding our MetaInfo quality guidelines and the quality checks on the website. If you haven't yet, please add these colors to your app's MetaInfo file using the <branding/> AppStream tag, and read on to learn more about brand colors.

    What are brand colors?

    App brand colors are an easy and effective way for app developers to give their listing a bit more personality in app stores. In combination with the app icon and name, they allow setting a tone for the app without requiring a lot of extra work, unlike e.g. creating and maintaining additional image assets.

    GNOME Software Explore view with brand colors on app banners

    Why now?

    This idea was first implemented in elementary AppCenter, and later standardized as part of the AppStream specification.

    While it has been in AppStream itself for a few years, it was unfortunately not possible for Flathub's backend to pick it up until the recent port to libappstream. This is why many apps are still not providing this metadata—even if it was available from the app side, we were unable to display it until now.

    elementary AppCenter with brand colors on app banners

    Now that we can finally pick these colors up from AppStream MetaInfo files, we want to make use of them—and they are essential for the new banners.

    Adding brand colors

    Apps are expected to provide two different brand colors, one for the light and one for the dark color scheme. Here's an example of a MetaInfo file in the wild that includes brand colors.

    This is the snippet you need to include in your MetaInfo file:

    <branding>
      <color type="primary" scheme_preference="light">#faa298</color>
      <color type="primary" scheme_preference="dark">#7f2c22</color>
    </branding>

    When choosing the colors, make sure they work well in their respective context (e.g. don't use a light yellow for the dark color scheme) and that they look good as a background behind the app icon (e.g. avoid using exactly the same color as the icon, to maintain contrast). In most cases it's recommended to pick a lighter tint of a main color from the icon for the light color scheme, and a darker shade of it for the dark color scheme. Alternatively, you can go with a complementary color that works well with the icon's colors.

    Three examples of good/bad brand colors

    What's next?

    Today we've updated the MetaInfo quality guidelines with a new section on app brand colors. Going forward, brand colors will be required as part of the MetaInfo quality review.

    If you have an app on Flathub, check out the guidelines and update your MetaInfo with brand colors as soon as possible. This will help your app look as good as possible, and will make it eligible to be featured when the new banners ship. Let's make Flathub a more colorful, exciting place to find new apps!

      docs.flathub.org/blog/introducing-app-brand-colors


      Jussi Pakkanen: Tagged PDFs with CapyPDF now sort of possible

      news.movim.eu / PlanetGnome · 3 days ago - 19:50 · 1 minute

    There are many open source PDF generators available. Unfortunately they all have some limitations when it comes to generating tagged PDFs:

    • Cairo does not support tagged PDFs at all
    • LaTeX can create tagged PDFs, but obviously only out of LaTeX documents
    • Scribus does not support tagged PDF
    • LibreOffice does support tagged PDF generation, but its code is not available as a standalone library; it can only be used via LibreOffice
    • HexaPDF does not support tagged PDFs (though they seem to be on the roadmap), but it is implemented in Ruby so most projects can't use it
    • Firefox can print pages to PDF, but the result is not tagged, even though the PDF document structure model is almost exactly the same as in HTML

    There does not seem to be a library that provides all of this with a "plain C" API that can be used to easily generate tagged PDFs from almost any programming language.

    There still isn't, but at least now CapyPDF can generate simple tagged PDF documents. A sample document can be downloaded via this link. Here is a picture showing the document structure in Acrobat Pro.

    It should also work in things like screen readers and other accessibility tools, but I have not tested it.

    None of this is exposed in the C API, because this has a fairly large API surface and I have not yet come up with a good way to represent it.


      nibblestew.blogspot.com/2024/02/tagged-pdfs-with-capypdf-now-sort-of.html


      Dorothy Kabarozi: Conversations in Open Source: Insights from Informal Chats with Open Source Contributors.

      news.movim.eu / PlanetGnome · 5 days ago - 05:06 · 2 minutes

    Introduction

    Open source embodies collaboration, innovation, and accessibility within the technological realm. Seeking personal insights behind the collaborative efforts, I engaged in conversations with individuals integral to the open source community, revealing the diversity, challenges, and impacts of their work.

    Conversations Summary

    Venturing beyond my comfort zone, I connected with seasoned open source contributors, each offering unique perspectives and experiences. Their roles varied from project maintainers to mentors, working on everything from essential libraries to innovative technologies.

    • Das: Shared valuable insights on securing roles in open source, including resources for applications and tips for academic writing and conference speaking. The best part with Das was that she also reviewed my resume, shared many ways I could make it outstanding, and shared templates to use for this. We had a really great chat.
    • Samuel: A seasoned C/C++ programmer working mainly on open-source user-space driver development. He was kind enough to share his 20-year journey of how he started working with open source and how he loves working with low-level hardware. He also commended Outreachy as a great opportunity and my contributions with GNOME in QA testing. He encouraged me to apply for roles at the company he's working with, and highlighted, “Even if they say NO now, next time they will say YES”. Samuel also encouraged me to find my passion, which will guide me to learn faster and create my personal brand, and encouraged me to submit some conference talks.
    • Dustin: Shared his 20-year journey; we mostly talked about programming and software engineering in general. He highlighted the significance of networking and adaptability to learn quickly in open source. He shared a story of how he “printed out code on one of his first jobs and learnt a skill of figuring out early what you don’t need to understand when faced with a big code base”. This is one skill I needed at the start, instead of drowning in documentation trying to understand the project and where to start.
    • Stefan: Discussed his transition from a GSoC participant to a mentor, shared open source job links, and commended Outreachy as a big plus. He highlighted not to set yourself up with a mental block that you can't do anything, because you can. He encouraged me to submit talks at conferences, network, and publish my work.

    These interactions showcased the wide-ranging backgrounds and motivations within the open source community, and have deepened my respect for that community and its contributors. I have some homework to do on my resume and the links to opportunities that were shared with me. Open source welcomes contributors at all levels, offering a platform for innovation and collective achievement.

    Feel free to apply to be an Outreachy intern in one of the upcoming cohorts to start your journey.

    Best of luck.

      dorothykabarozi.wordpress.com/2024/02/21/conversations-in-open-source-insights-from-informal-chats-with-open-source-contributors/


      Carlos Garcia Campos: A Clarification About WebKit Switching to Skia

      news.movim.eu / PlanetGnome · 6 days ago - 18:11

    In the previous post I talked about the plans of the WebKit ports currently using Cairo to switch to Skia for 2D rendering. Apple ports don't use Cairo, so they won't be switching to Skia. I understand the post title was confusing; I'm sorry about that. The original post has been updated for clarity.

      blogs.igalia.com/carlosgc/2024/02/20/a-clarification-about-webkit-switching-to-skia/


      Matthew Garrett: Debugging an odd inability to stream video

      news.movim.eu / PlanetGnome · 7 days ago - 22:30 · 5 minutes

    We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

    This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

    This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.
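
    As a rough breakdown of where that 1420 comes from (my arithmetic, not the post's, assuming the tunnel itself runs over IPv6 with standard header sizes; over IPv4 the outer header is 20 bytes, leaving 1440):

    outer IP header      40 bytes   (IPv6; 20 bytes over IPv4)
    UDP header            8 bytes
    WireGuard overhead   32 bytes   (16-byte data-message header + 16-byte auth tag)
    -------------------------------
    total overhead       80 bytes   ->  1500 - 80 = 1420 bytes left for the inner packet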

    And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link whose MTU it exceeds, that hop will send back a packet telling the remote that it's too big and what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.
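
    As an illustration (not from the post - the host name and interface name below are placeholders), Linux's ping can set the Don't Fragment bit itself, which makes it a handy way to probe the path MTU:

    # 1472 bytes of ICMP payload + 8 bytes of ICMP header + 20 bytes of IPv4 header = 1500 bytes.
    # "-M do" sets Don't Fragment and refuses local fragmentation, so this either arrives
    # intact or fails with an error that typically reports the smaller MTU.
    ping -M do -s 1472 example.com

    # The same probe won't fit through a WireGuard interface with a 1420-byte MTU, so it
    # fails immediately; lowering -s until it succeeds reveals the usable payload size.
    ping -M do -s 1472 -I wg0 example.com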

    What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

    All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

    (Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
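
    The post doesn't show the actual rule; a typical way to do this kind of MSS rewriting with iptables looks something like the sketch below (the wg0 interface name and the fixed value of 1380 are assumptions, not taken from the post):

    # Clamp the MSS advertised in TCP SYNs forwarded over the WireGuard interface so that
    # peers never send segments which, once TCP/IP headers are added, exceed the 1420-byte MTU.
    iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --clamp-mss-to-pmtu

    # Alternatively, set an explicit value: 1420 - 40 bytes of TCP/IP headers = 1380.
    # iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
    #     -j TCPMSS --set-mss 1380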

    I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation that someone able to do something about it saw my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?


      Carlos Garcia Campos: WebKit Switching to Skia for 2D Graphics Rendering

      news.movim.eu / PlanetGnome · 7 days ago - 13:27 · 3 minutes

    In recent years we have had an ongoing effort to improve graphics performance of the WebKit GTK and WPE ports. As a result of this we shipped features like threaded rendering, the DMA-BUF renderer, or proper vertical retrace synchronization (VSync). While these improvements have helped keep WebKit competitive, and even perform better than other engines in some scenarios, it has been clear for a while that we were reaching the limits of what can be achieved with a CPU based 2D renderer.

    There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster.

    We checked all the available 2D rendering libraries we could find, but none of them met all our requirements, so we decided to try writing our own library. At the beginning it worked really well, with impressive results in performance even compared to other GPU based alternatives. However, it proved challenging to find the right balance between performance and rendering quality, so we decided to try other alternatives before continuing with its development. Our next option had always been Skia. The main reason why we didn’t choose Skia from the beginning was that it didn’t provide a public library with API stability that distros can package and we can use like most of our dependencies. It still wasn’t what we wanted, but now we have more experience in WebKit maintaining third party dependencies inside the source tree like ANGLE and libwebrtc, so it was no longer a blocker either.

    In December 2023 we decided to give Skia a try internally and see if it would be worth the effort of maintaining the project as a third-party module inside WebKit. In just one month we had implemented enough features to be able to run all MotionMark tests. The results on the desktop were quite impressive, doubling the global MotionMark score. We still had to do more tests on embedded devices, which are the actual target of WPE, but it was clear that, at least on the desktop, with this very initial implementation that was not even optimized (we kept our current architecture, which is optimized for CPU rendering), we got much better results. We decided that Skia was the option, so we continued working on it and doing more tests on embedded devices. On the boards that we tried we also got better results than CPU rendering, but the difference was not as big, which means that with less powerful GPUs and with our current architecture designed for CPU rendering, GPU rendering was not that far ahead of CPU rendering. That's the reason why we managed to keep WPE competitive on embedded devices, but Skia will not only bring performance improvements; it will also simplify the code and will allow us to implement new features. So, we already had enough data to make the final decision to go with Skia.

    In February 2024 we reached a point where our internal Skia branch was in an “upstreamable” state, so there was no reason to continue working privately. We met with several teams from Google, Sony, Apple and Red Hat to discuss our intention to switch from Cairo to Skia, upstreaming what we had as soon as possible. We got really positive feedback from all of them, so we sent an email to the WebKit developers mailing list to make it public. And again we only got positive feedback, so we started to prepare the patches to import Skia into WebKit, add the CMake integration, and add the initial Skia implementation for the WPE port; these have already landed in main.

    We will continue working on the Skia implementation in upstream WebKit, and we also have plans to change our architecture to better support the GPU rendering case in a more efficient way. We don't have a deadline; it will be ready when we have implemented everything currently supported by Cairo, as we don't plan to switch with regressions. We are focused on the WPE port for now, but at some point we will start working on GTK too, and other ports using Cairo will eventually start getting Skia support as well.

      blogs.igalia.com/carlosgc/2024/02/19/webkit-switching-to-skia-for-2d-graphics-rendering/


      Federico Mena-Quintero: Rustifying libipuz: character sets

      news.movim.eu / PlanetGnome · Sunday, 18 February - 04:21 · 10 minutes

    It has been, what, like four years since librsvg got fully rustified, and now it is time to move another piece of critical infrastructure to a memory-safe language.

    I'm talking about libipuz, the GObject-based C library that GNOME Crosswords uses underneath. This is a library that parses the ipuz file format and is able to represent various kinds of puzzles.

    The words "GNOME CROSSWORDS" set inside a crossword puzzle

    Libipuz is an interesting beast. The ipuz format is JSON with a lot of hair: it needs to represent the actual grid of characters and their solutions, the grid's cells' numbers, the puzzle's clues, and all the styling information that crossword puzzles can have (it's more than you think!).

    {
        "version": "http://ipuz.org/v2",
        "kind": [ "http://ipuz.org/crossword#1", "https://libipuz.org/barred#1" ],
        "title": "Mephisto No 3228",
        "styles": {
            "L": {"barred": "L" },
            "T": {"barred": "T" },
            "TL": {"barred": "TL" }
        },
        "puzzle":   [ [  1,  2,  0,  3,  4,  {"cell": 5, "style": "L"},  6,  0,  7,  8,  0,  9 ],
                      [  0,  {"cell": 0, "style": "L"}, {"cell": 10, "style": "TL"},  0,  0,  0,  0,  {"cell": 0, "style": "T"},  0,  0,  {"cell": 0, "style": "T"},  0 ]
                    # the rest is omitted
        ],
        "clues": {
            "Across": [ {"number":1, "clue":"Having kittens means losing heart for home day", "enumeration":"5", "cells":[[0,0],[1,0],[2,0],[3,0],[4,0]] },
                        {"number":5, "clue":"Mostly allegorical poet on writing companion poem, say", "enumeration":"7", "cells":[[5,0],[6,0],[7,0],[8,0],[9,0],[10,0],[11,0]] },
                    ]
            # the rest is omitted
        }
    }
    

    Libipuz uses json-glib, which works fine to ingest the JSON into memory, but then it is a complete slog to distill the JSON nodes into C data structures. You need to iterate through each node in the JSON tree and try to fit its data into yours.

    Get me the next node. Is the node an array? Yes? How many elements? Allocate my own array. Iterate the node's array. What's in this element? Is it a number? Copy the number to my array. Or is it a string? Do I support that, or do I throw an error? Oh, don't forget the code to meticulously free the partially-constructed thing I was building.

    This is not pleasant code to write and test.

    Ipuz also has a few mini-languages within the format, which live inside string properties. Parsing these in C is unpleasant at best.

    Differences from librsvg

    While librsvg has a very small GObject-based API, and a medium-sized library underneath, libipuz has a large API composed of GObjects, boxed types, and opaque and public structures. Using libipuz involves doing a lot of calls to its functions, from loading a crossword to accessing each of its properties via different functions.

    I want to use this rustification as an exercise in porting a moderately large C API to Rust. Fortunately, libipuz does have a good test suite that is useful from the beginning of the port.

    Also, I want to see what sorts of idioms appear when exposing things from Rust that are not GObjects. Mutable, opaque structs can just be passed as a pointer to a heap allocation, i.e. a Box<T>. I want to take the opportunity to make more things in libipuz immutable; currently it has a bunch of reference-counted, mutable objects, which are fine in single-threaded C, but decidedly not what Rust would prefer. For librsvg it was very beneficial to be able to notice parts of objects that remain immutable after construction, and to distinguish those parts from the mutable ones that change when the object goes through its lifetime.

    Let's begin!

    In the ipuz format, crosswords have a character set or charset: it is the set of letters that appear in the puzzle's solution. Internally, GNOME Crosswords uses the charset as a histogram of letter counts for a particular puzzle. This is useful information for crossword authors.

    Crosswords uses the histogram of letter counts in various important algorithms, for example, the one that builds a database of words usable in the crosswords editor. That database has a clever format which allows answering questions like the following quickly: which words in the database match "?OR??"? WORDS and CORES will match.

    IPuzCharset is one of the first pieces of code I worked on in Crosswords, and it later got moved to libipuz. Originally it didn't even keep a histogram of character counts; it was just an ordered set of characters that could answer the question, "what is the index of the character ch within the ordered set?".

    I implemented that ordered set with a GTree, a balanced binary tree. The keys in the key/value tree were the characters, and the values were just unused.

    Later, the ordered set was turned into an actual histogram with character counts: keys are still characters, but each value is now a count of the corresponding character.

    Over time, Crosswords started using IPuzCharset for different purposes. It is still used while building and accessing the database of words; but now it is also used to present statistics in the crosswords editor, and as part of the engine in an acrostics generator.

    In particular, the acrostics generator has been running into some performance problems with IPuzCharset. I wanted to take the port to Rust as an opportunity to change the algorithm and make it faster.

    Refactoring into mutable/immutable stages

    IPuzCharset started out with these basic operations:

    /* Construction; memory management */
    IPuzCharset          *ipuz_charset_new              (void);
    IPuzCharset          *ipuz_charset_ref              (IPuzCharset       *charet);
    void                  ipuz_charset_unref            (IPuzCharset       *charset);
    
    /* Mutation */
    void                  ipuz_charset_add_text         (IPuzCharset       *charset,
                                                         const char        *text);
    gboolean              ipuz_charset_remove_text      (IPuzCharset       *charset,
                                                         const char        *text);
    
    /* Querying */
    gint                  ipuz_charset_get_char_index   (const IPuzCharset *charset,
                                                         gunichar           c);
    guint                 ipuz_charset_get_char_count   (const IPuzCharset *charset,
                                                         gunichar           c);
    gsize                 ipuz_charset_get_n_chars      (const IPuzCharset *charset);
    gsize                 ipuz_charset_get_size         (const IPuzCharset *charset);
    

    All of those are implemented in terms of the key/value binary tree that stores a character in each node's key, and a count in the node's value.

    I read the code in Crosswords that uses the ipuz_charset_*() functions and noticed that in every case, the code first constructs and populates the charset using ipuz_charset_add_text(), and then doesn't modify it anymore — it only does queries afterwards. The only place that uses ipuz_charset_remove_text() is the acrostics generator, but that one doesn't do any queries later: it uses the remove_text() operation as part of another algorithm, but only that.

    So, I thought of doing this:

    • Split things into a mutable IPuzCharsetBuilder that has the add_text / remove_text operations, and also has a build() operation that consumes the builder and produces an immutable IPuzCharset.

    • IPuzCharset is immutable; it can only be queried.

    • IPuzCharsetBuilder can work with a hash table, which turns the "add a character" operation from O(log n) to O(1) amortized.

    • build() is O(n) on the number of unique characters and is only done once per charset.

    • Make IPuzCharset work with a different hash table that also allows for O(1) operations.

    Basics of IPuzCharsetBuilder

    IPuzCharsetBuilder is mutable, and it can live on the Rust side as a Box<T> so it can present an opaque pointer to C.

    #[derive(Default)]
    pub struct CharsetBuilder {
        histogram: HashMap<char, u32>,
    }
    
    // IPuzCharsetBuilder *ipuz_charset_builder_new (void);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_new() -> Box<CharsetBuilder> {
        Box::new(CharsetBuilder::default())
    }
    

    For extern "C", Box<T> marshals as a pointer. It's nominally what one would get from malloc().
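
    The post's API listing doesn't include a free function for the builder, but under this Box-as-pointer convention a hypothetical one would be a one-liner: taking the Box back by value is enough to free it.

    // Hypothetical, not part of the API shown in the post:
    // void ipuz_charset_builder_free (IPuzCharsetBuilder *builder);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_free(builder: Option<Box<CharsetBuilder>>) {
        // Taking ownership of the Box again means it is dropped (and freed) on return;
        // Option<Box<T>> marshals a NULL pointer as None, so passing NULL is harmless.
        drop(builder);
    }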

    Then, simple functions to create the character counts:

    impl CharsetBuilder {
        /// Adds `text`'s character counts to the histogram.
        fn add_text(&mut self, text: &str) {
            for ch in text.chars() {
                self.add_character(ch);
            }
        }
    
        /// Adds a single character to the histogram.
        fn add_character(&mut self, ch: char) {
            self.histogram
                .entry(ch)
                .and_modify(|e| *e += 1)
                .or_insert(1);
        }
    }
    

    The C API wrappers:

    use std::ffi::CStr;
    use std::os::raw::c_char;
    
    // void ipuz_charset_builder_add_text (IPuzCharsetBuilder *builder, const char *text);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_add_text(
        builder: &mut CharsetBuilder,
        text: *const c_char,
    ) {
        let text = CStr::from_ptr(text).to_str().unwrap();
        builder.add_text(text);
    }
    

    CStr is our old friend that takes a char * and can wrap it as a Rust &str after validating it for UTF-8 and finding its length. Here, the unwrap() will panic if the passed string is not UTF-8, but that's what we want; it's the equivalent of an assertion that what was passed in is indeed UTF-8.

    // void ipuz_charset_builder_add_character (IPuzCharsetBuilder *builder, gunichar ch);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_add_character(builder: &mut CharsetBuilder, ch: u32) {
        let ch = char::from_u32(ch).unwrap();
        builder.add_character(ch);
    }
    

    Somehow, the glib-sys crate doesn't have gunichar, which is just a guint32 for a Unicode code point. So, we take in a u32, and check that it is in the appropriate range for Unicode code points with char::from_u32(). Again, a panic in the unwrap() means that the passed number is out of range; equivalent to an assertion.

    Converting to an immutable IPuzCharset

    pub struct Charset {
        /// Histogram of characters and their counts plus derived values.
        histogram: HashMap<char, CharsetEntry>,
    
        /// All the characters in the histogram, but in order.
        ordered: String,
    
        /// Sum of all the counts of all the characters.
        sum_of_counts: usize,
    }
    
    /// Data about a character in a `Charset`.  The "value" in a key/value pair where the "key" is a character.
    #[derive(PartialEq)]
    struct CharsetEntry {
        /// Index of the character within the `Charset`'s ordered version.
        index: u32,
    
        /// How many of this character in the histogram.
        count: u32,
    }
    
    impl CharsetBuilder {
        fn build(self) -> Charset {
            // omitted for brevity; consumes `self` and produces a `Charset` by adding
            // the counts for the `sum_of_counts` field, and figuring out the sort
            // order into the `ordered` field.
        }
    }
    

    Now, on the C side, IPuzCharset is meant to also be immutable and reference-counted. We'll use Arc<T> for such structures. One cannot return an Arc<T> to C code; it must first be converted to a pointer with Arc::into_raw():

    // IPuzCharset *ipuz_charset_builder_build (IPuzCharsetBuilder *builder);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_builder_build(
        builder: *mut CharsetBuilder,
    ) -> *const Charset {
        let builder = Box::from_raw(builder); // get back the Box from a pointer
        let charset = builder.build();        // consume the builder and free it
        Arc::into_raw(Arc::new(charset))      // Wrap the charset in Arc and get a pointer
    }
    

    Then, implement ref() and unref():

    // IPuzCharset *ipuz_charset_ref (IPuzCharset *charet);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_ref(charset: *const Charset) -> *const Charset {
        Arc::increment_strong_count(charset);
        charset
    }
    
    // void ipuz_charset_unref (IPuzCharset *charset);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_unref(charset: *const Charset) {
        Arc::decrement_strong_count(charset);
    }
    

    The query functions need to take a pointer to what really is the Arc<Charset> on the Rust side. They reconstruct the Arc with Arc::from_raw() and wrap it in ManuallyDrop so that the Arc doesn't lose a reference count when the function exits:

    // gsize ipuz_charset_get_n_chars (const IPuzCharset *charset);
    #[no_mangle]
    pub unsafe extern "C" fn ipuz_charset_get_n_chars(charset: *const Charset) -> usize {
        let charset = ManuallyDrop::new(Arc::from_raw(charset));
        charset.get_n_chars()
    }
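
    The post doesn't show the Rust side of the query methods; here is a minimal sketch of what they could look like, assuming the Charset fields above and the behaviour implied by the C API and the tests below:

    impl Charset {
        /// Number of distinct characters in the charset.
        fn get_n_chars(&self) -> usize {
            self.histogram.len()
        }

        /// Total number of characters added, i.e. the sum of all the counts.
        fn get_size(&self) -> usize {
            self.sum_of_counts
        }

        /// Count for a single character, or None if it never appeared.
        fn get_char_count(&self, ch: char) -> Option<u32> {
            self.histogram.get(&ch).map(|entry| entry.count)
        }

        /// Index of the character within the ordered set, or None if it is absent.
        fn get_char_index(&self, ch: char) -> Option<u32> {
            self.histogram.get(&ch).map(|entry| entry.index)
        }
    }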
    

    Tests

    The C tests remain intact; these let us test all the #[no_mangle] wrappers.

    The Rust tests can just be for the internals, similar to this:

        #[test]
        fn supports_histogram() {
            let mut builder = CharsetBuilder::default();
    
            let the_string = "ABBCCCDDDDEEEEEFFFFFFGGGGGGG";
            builder.add_text(the_string);
            let charset = builder.build();
    
            assert_eq!(charset.get_size(), the_string.len());
    
            assert_eq!(charset.get_char_count('A').unwrap(), 1);
            assert_eq!(charset.get_char_count('B').unwrap(), 2);
            assert_eq!(charset.get_char_count('C').unwrap(), 3);
            assert_eq!(charset.get_char_count('D').unwrap(), 4);
            assert_eq!(charset.get_char_count('E').unwrap(), 5);
            assert_eq!(charset.get_char_count('F').unwrap(), 6);
            assert_eq!(charset.get_char_count('G').unwrap(), 7);
    
            assert!(charset.get_char_count('H').is_none());
        }
    

    Integration with the build system

    Libipuz uses meson, which is not particularly fond of cargo. Still, cargo can be used from meson with a wrapper script and a few easy hacks. See the merge request for details.

    Further work

    I've left the original C header file ipuz-charset.h intact, but ideally I'd like to automatically generate the headers from Rust with cbindgen. Doing it that way lets me check that my assumptions about the extern "C" ABI are correct ("does foo: &mut Foo appear as Foo *foo on the C side?"), and it's one fewer C-ism to write by hand. I need to see what to do about inline documentation; gi-docgen can consume C header files just fine, but I'm not yet sure how to make it work with headers generated by cbindgen.

    I still need to modify the CI's code coverage scripts to work with the mixed C/Rust codebase. Fortunately I can copy those incantations from librsvg.

    Is it faster?

    Maybe! I haven't benchmarked the acrostics generator yet. Stay tuned!

      viruta.org/rustifying-libipuz-charset.html


      Juan Pablo Ugarte: Cambalache Gtk4 port goes beta!

      news.movim.eu / PlanetGnome · Friday, 16 February - 14:32

    Hi, I am happy to announce Cambalache’s Gtk4 port has a beta release!

    Version 0.17.2 features minor improvements and a brand new UI ported to Gtk 4!

    Editing Cambalache UI in Cambalache

    The port was easier than expected, but there were still lots of changes, as you can see here…

    64 files changed, 2615 insertions(+), 2769 deletions(-)

    I especially like the new GtkDialog API and the removal of gtk_dialog_run().
    With so many changes I expect some new bugs, so if you find any, please file them here.

    Where to get it?

    You can get it from Flathub Beta

    flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
    
    flatpak install flathub-beta ar.xjuan.Cambalache

    or check out the main branch from GitLab:

    git clone https://gitlab.gnome.org/jpu/cambalache.git

    Matrix channel

    Have any questions? Come chat with us at #cambalache:gnome.org

    Mastodon

    Follow me on Mastodon @xjuan to get news related to Cambalache development.

    Happy coding!

      blogs.gnome.org/xjuan/2024/02/16/cambalache-gtk4-port-goes-beta/


      Sam Thursfield: Status update, 16/02/2024

      news.movim.eu / PlanetGnome · Friday, 16 February - 14:16 · 4 minutes

    Some months you work very hard and there is little to show for any of it… so far this is one of those very draining months. I’m looking forward to spring, and spending less time online and at work.

    Rather than ranting I want to share a couple of things from elsewhere in the software world to show what we should be aiming towards in the world of GNOME and free operating systems.

    Firstly, from “Is something bugging you?” :

    Before we even started writing the database, we first wrote a fully-deterministic event-based network simulation that our database could plug into. … if one particular simulation run found a bug in our application logic, we could run it over and over again with the same random seed, and the exact same series of events would happen in the exact same order. That meant that even for the weirdest and rarest bugs, we got infinity “tries” at figuring it out, and could add logging, or do whatever else we needed to do to track it down.

    The text is hyperbolic, and for some reason they think it's ok to work with Palantir, but it's an inspiring read.

    Secondly, a 15 minute video which you should watch in its entirety, and then consider how the game development world got so far ahead of everyone else.

    This is the world that’s possible for operating systems, if we focus on developer experience and integration testing across the whole OS. And GNOME OS + openQA is where I see some of the most promising work happening right now.

    Of course we’re a looooong way from this promised land, despite the progress we’ve made in the last 10+ years. Automated testing of the OS is great, but we don’t always have logs ( bug ), the image randomly fails to start ( bug ), the image takes hours to build , we can’t test merge requests ( bug ), testing takes 15+ minutes to run, etc. Some of these issues seem intractable when occasional volunteer effort is all we have.

    Imagine a world where you can develop and live-deploy changes to your phone and your laptop OS, exhaustively test them in CI, step backwards with a debugger when problems arise – this is what we should be building in the tech industry. A few teams are chipping away at this vision – in the Linux world, GNOME Builder and the Fedora Atomic project spring to mind, and I'm sure there are more.

    Anyway, what happened last month?

    Outreachy

    This is the final month of the Outreachy internship that I'm running around end-to-end testing of GNOME. We already have some wins: there are now 5 separate testsuites running against GNOME OS, although they are unfortunately rather useless at present due to random startup failures.

    I spent a *lot* of time working with Tanju on a way to usefully test GNOME on Mobile. I haven't been able to follow this effort closely beyond seeing a few demos and old blogs, so this week was something of a crash course on what there is. Along the way I got pretty confused about scaling in GNOME Shell – it turns out there's currently a hardcoded minimum screen size, and upstream Mutter will refuse to scale the display below a certain size. In fact upstream GNOME Shell doesn't have any of the necessary adaptations for use in a mobile form factor. We really need a “GNOME OS Mobile” VM image – here's an open issue – but it's unlikely to be done within the last 2 weeks of the current internship. The best we can do for now is test the apps on a regular desktop screen, but with the window resized to 360×720 pixels.

    On the positive side, hopefully this has been a useful journey for Tanju and Dorothy into the inner workings of GNOME. On a separate note, we submitted a workshop on openQA testing to GUADEC in Denver, and if all goes well with travel sponsorship and US visa applications, we hope to actually meet in person there in July.

    FOSDEM

    I went to FOSDEM 2024 and had a great time – it was one of my favourite FOSDEM trips. I managed to avoid the 'flu – I think wearing a mask on the plane is the secret. From Codethink we were 41 people this year – probably a new record.

    I went a day early to join in the end of the GTK hackfest, and did a little work on the Tiny SPARQL database, formerly known as Tracker SPARQL. Together with Carlos we fixed breakage in the CLI, improved HTTP support and prototyped a potential internship project to add a web-based query editor.

    My main goal at FOSDEM was to make contact with other openQA users and developers, and we had some success there. Since then I've hashed out a wishlist for openQA for GNOME's use cases, and we're aiming to set up an open, monthly call where different QA teams can get together and collaborate on a realistic roadmap.

    I saw some great talks too; the “Outreachy 1000 Interns” talk and the Fairphone “Sustainable and Longlasting Phones” talk were particular highlights. I went to the Bytenight event for the first time and found an incredible 1970s Wurlitzer transistor organ in the “smoking area” of the HSBXL hackspace, and also beat Javi, Bart and Adam at Super Tuxcart several times.