Slidge v0.1.0-rc1 is out!
Nicoco · Sunday, 8 January - 14:51
Now certified 100% bug-free*. I made a little blog post about it.
*is NOT a joke
Slidge Beta2
Nicoco · Sunday, 30 October, 2022 - 00:00 · 5 minutes
It has almost been two months since I announced slidge beta0. A lot has been fixed and improved during these two months, thanks to a few early adopters that I will never thank enough for their patience and time. All in all, it felt right to tag a new beta now.
I have never hidden it: I believe that emoji reactions to messages, as stupid as they may look to bearded TUI lovers, are an awesome instant messaging feature. They are a convenient way to acknowledge that you have read a message and to signal whether or not you approve of its content. The chat markers XEP proposed an acknowledgement mechanism, but as far as I know, no XMPP client ever implemented the <acknowledged> marker. Arguably, reaction emojis offer a richer experience than a single "ack" button anyway.
Slidge implements XEP-0444, and most plugins use this core feature of slidge. The specification is very permissive: a single chatter can react with several emojis, and can use any emoji to do so. A lot of legacy networks are much more restrictive: on Signal, for instance, you can only react with a single emoji; on Telegram, you can only use a subset of emojis; etc.
To reflect these limitations on the XMPP end, slidge takes advantage of the privileged entity XEP to impersonate the XMPP user and adjust their reactions, so that they are in sync on both ends.
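The gist of that adjustment can be sketched like this. This is a toy illustration with invented names, not slidge's actual code; the `max_reactions` and `allowed` parameters just stand in for whatever constraints a given legacy network imposes:

```python
# Illustrative sketch of clamping XMPP reactions (XEP-0444 allows any
# number of arbitrary emojis) to a legacy network's rules.
# NOT slidge's real implementation; all names here are invented.

def clamp_reactions(reactions, max_reactions=None, allowed=None):
    """Return the subset of `reactions` that the legacy network accepts.

    reactions: emojis the XMPP user reacted with
    max_reactions: e.g. 1 on a Signal-like network (None = unlimited)
    allowed: e.g. a Telegram-like restricted emoji set (None = any emoji)
    """
    if allowed is not None:
        reactions = [r for r in reactions if r in allowed]
    if max_reactions is not None:
        reactions = reactions[:max_reactions]
    return reactions

# Single-reaction network: only the first emoji survives
print(clamp_reactions(["👍", "🎉"], max_reactions=1))  # ['👍']
# Restricted emoji set: disallowed emojis are dropped
print(clamp_reactions(["👍", "🤷"], allowed={"👍", "❤️"}))  # ['👍']
```

After clamping, the component would use the privileged entity mechanism to send a corrected reactions stanza on the user's behalf, which is what keeps both sides consistent.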
Movim was the first XMPP client to implement XEP-0444, but when slidge beta0 came out, Movim didn't implement it in a way that made it possible to reflect legacy networks' limitations. But edhelas was very reactive (hoho) about it, and it's now fixed, as you can see in the short clip below.
Although not released yet, the dino team merged emoji reactions into the master branch, and I gave it a try. This led to a few bug reports on the dino tracker, which in turn triggered some fixes in slidge to follow the specification more rigorously.
jcbrand, maintainer of the conversejs client, also dropped by the slidge MUC to say that seeing reactions implemented in slidge motivated him to have them in conversejs too. A very pleasant surprise!
Various feature requests are open on other client trackers; let's hope this gains traction somehow.
I don't think I will add new features for the 0.1.0 release, except maybe, maybe some history backfilling mechanism, but that is still undecided. What would be really great is having more people test all the different plugins. As of now, I believe only the telegram and signal plugins have been tested by anyone other than me. Testers, please give all slidge plugins a try and share your feedback!
I had MUCs sort of working on the previous slidge iteration, but decided to postpone their support in order to focus on having direct messages working properly with a maintainable codebase. Since MIX seems to gain some traction, it may also be supported in 0.2.0. I have yet to dive into both specs a little more before deciding what is more appropriate to implement.
Here are some ideas that might be more than ideas someday.
Since legacy networks are likely to change their APIs, I think it would be nice to have different release cycles for slidge core and plugins. For the same reason, it may also be useful to have some sort of plugin installer/updater built into slidge.
Slidge may enter the official Debian repos someday...
Amazing news: Alex started working on a slidge-whatsapp plugin. I am thrilled that someone judged slidge's code base sane enough to build something on top of it!
The best library available for interacting with the whatsapp network is written in Go, so most of the work done until now was about determining how to make it usable from python. Hopefully, this work will also help with writing plugins for other networks where only Go bindings are available, such as threema.
Personal eventing protocol is the modern XMPP way to advertise avatars and nicknames; slidge now supports it instead of the old vcard-temp way of setting contact avatars.
This also made it possible to add support for vcards over XMPP, which I think are mostly useful for phone number-based networks. Only Movim and Gajim support this XEP for now, unfortunately.
Thanks to IGImonster and contributions from azerttyu, slidge now offers debian packages for amd64 and arm64, and even a repository so you can "apt update" your way to slidge.
The containers on dockerhub now support the ARM64 architecture (previously only amd64).
Thanks to feedback from ejabberd users, various changes have been made to ensure that slidge works with ejabberd.
That's all, folks! Grab slidge 0.1.0-beta2 from sourcehut, pypi, deb.slidge.im, or dockerhub.
Slidge First Beta
Nicoco · Sunday, 4 September, 2022 - 00:00 · 3 minutes
After a year and a half of development, including a very long pause, a few rewrites, and thanks to my summer vacations, I am proud to announce the release of slidge version 0.1.0-beta0.
Slidge lets you use your usual XMPP client to communicate with your buddies on other "legacy" instant messaging networks, much like spectrum2, but without libpurple, and targeted at XMPP exclusively. It is an XMPP server component, acting as an alternative client using your legacy credentials. Your legacy contacts are assigned a puppet JID (someusername@legacynetwork.example.com) that you can use to exchange messages with them.
Slidge in itself is just a library, and legacy network clients are actually slidge's plugins. Slidge includes plugins for 7 different instant messaging (IM) services.
After discovering IM with mIRC, ICQ and then MSN Messenger in the late 90s/early 2000s, I was amazed when I discovered pidgin and was a happy user for many years (oh psychic mode, how you made me look like a magician to non-techies). But a few years ago, when I finally jumped on the smartphone bandwagon, it was a bit frustrating not to have the same "single chat app" experience across multiple devices.
XMPP gateways are a nice way to achieve this, but except for the excellent biboumi IRC gateway, I have always been frustrated by the shortcomings of the available implementations. Spectrum2 is the leading project in this area, but has been in maintenance mode for a while and does not plan to integrate new cool features of the now experimental XEPs, such as message reactions which I hope many more XMPP clients will implement in the future (only Movim implements it AFAIK; it's great BTW).
Yes it does, with the limitation of not providing any group-related features: only direct messages are supported. If everything goes according to plan, groups will be part of the 0.2.0 release, some day™.
I use it daily, and it works just fine for me (which was and still is slidge's main goal). It has not been extensively battle-tested, and I would be happy to have feedback of any kind, including bug reports and/or criticism of the implementation/technical choice/code style/whatever. Sucking less at writing code is a big reason for me to work on this project.
Slidge is written in async python, using the sleek slixmpp library. It attempts to make good use of mypy for static type checking, and includes (too few) tests using the pytest framework.
The general idea is to make it trivial to write plugins through the plugin API, abstracting away the XMPPalities and exposing simple methods. Hopefully, thanks to the rich python ecosystem, plugins should be a thin layer between the plugin API and some external library handling the legacy network specifics.
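To give an idea of the shape of such a "thin layer", here is a purely hypothetical sketch. Every name in it is invented for illustration; this is not slidge's real plugin API, just the kind of division of labor meant: the core owns the XMPP side, the plugin only translates simple method calls to and from the legacy network.

```python
# Hypothetical gateway-plugin sketch (invented names, NOT the real
# slidge API): the core turns send_xmpp_message() into stanzas from a
# puppet JID; the plugin implements on_xmpp_message() using whatever
# library talks to the legacy network.
import asyncio


class BaseSession:
    """Stand-in for what a gateway core might provide to plugins."""

    async def send_xmpp_message(self, legacy_contact_id: str, text: str):
        # A real core would emit an XMPP <message> from the contact's
        # puppet JID here; this toy version just records the call.
        self.last_delivered = (legacy_contact_id, text)


class EchoNetworkSession(BaseSession):
    """A fake 'legacy network' that simply echoes messages back."""

    async def on_xmpp_message(self, legacy_contact_id: str, text: str):
        # Called by the core when the user messages a puppet JID.
        # A real plugin would call the legacy network's library here.
        await self.send_xmpp_message(legacy_contact_id, f"echo: {text}")


session = EchoNetworkSession()
asyncio.run(session.on_xmpp_message("alice", "hi"))
print(session.last_delivered)  # ('alice', 'echo: hi')
```

The point of the sketch is the asymmetry: the plugin never touches stanzas, only plain strings and identifiers, which is what keeps legacy-network code thin.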
Slidge is server software, and screenshots of log outputs might not be very interesting to look at. Instead, here are side-by-side screenshots of the XMPP client Movim (left) and the official signal desktop app (right).
Neat, ain't it?
(Since XMPP is an open standard, Slidge will also work with any XMPP client, and I don't think there is a single OS that does not have at least one XMPP client.)
Right now, the recommended way to try it out is to go the container route; this is what I use with my personal XMPP server. A pypi package is also available if you prefer.
Cloning the git repo and trying it locally is also a very easy option: using docker-compose, it spins up a local XMPP server and a browser-based XMPP client to test slidge with.
A nice thing about cloning the repo is that it gets you about halfway to writing the patch that fixes the bug annoying you, or that implements some amazing new feature. ;)
Xmpp Bot Stable Diffusion
Nicoco · Wednesday, 31 August, 2022 - 00:00 · 6 minutes
Today is my last day of vacation for this summer.
I had more or less important stuff planned, but yesterday I learnt about stable diffusion, "a machine-learning model to generate digital images from natural language descriptions", which happens to be open source, including its trained weights. It's also pretty unfiltered, making it possible to generate so-called "deep fakes", erotica, and/or images of trademarked things, for extra fun. Being the nerd that I am, I couldn't help but try it out today, and ended up doing this in a couple of hours:
What do we see in this image? A screenshot of Dino showing an XMPP group chat, with a bot named t1000, and my dad (robi) prompting the bot to generate an image (well, 2 actually).
Since this type of bot is quite fun to play with, I thought it would be nice to share how I did this. Disclaimer: this is a very quick'n'dirty method to get this bot running; it's (very, very) far from being as advanced as this "similar" (hum) discord bot.
Before even thinking about a bot, I needed to try generating some images "the hard way". I expected this part to be a lot more painful and time-consuming, but it turned out to be pretty simple.
I followed the instructions from the official github repo, which included downloading the trained weights from hugging face.
Unfortunately, it turned out that an RTX 3080 Ti with its 12GB of VRAM was not enough for this base version, but a quick look at this github issue led me to this fork, which happens to have scripts for GPUs with only 4GB of VRAM. There are probably a lot of other ways to generate images on limited hardware, but this one worked for me, so I did not look further.
After copying the ./optimizedSD folder into the original stable diffusion source, I could run inferences like this:

python optimizedSD/optimized_txt2img.py \
    --prompt "Luffy with a guitar" \
    --H 512 --W 512 \
    --seed 27 \
    --n_iter 2 \
    --n_samples 10 \
    --ddim_steps 50
which filled a folder with these images in about 3 minutes:
There are about a million things that are wrong with this implementation, but hey, it let my non-techie friends play with this crazy shit, so my goal was met.
import asyncio
import logging
import random
import re
import tempfile
import time
from argparse import ArgumentParser
from pathlib import Path

import aiohttp
import slixmpp
from slixmpp import JID

# This is where you git cloned stable diffusion
CWD = Path("/home/nicoco/tmp/stable-diffusion/")

# This is the command you used to generate images.
# Here we only generate 4 images per prompt.
CMD_TXT2IMG = [
    "/home/nicoco/.local/miniconda3/envs/ldm/bin/python",
    "/home/nicoco/tmp/stable-diffusion/optimizedSD/optimized_txt2img.py",
    "--H",  # height and width must be multiples of 64
    "512",  # more pixels = more VRAM needed
    "--W",
    "512",
    "--n_iter",  # n_output_img = n_iter x n_samples
    "1",  # more n_samples = faster, but more VRAM
    "--n_samples",
    "4",
    "--ddim_steps",
    "50",
    "--turbo",  # remove these
    "--unet_bs",  # 3 last lines
    "4",  # for lower VRAM usage
]

CMD_IMG2IMG = [
    "/home/nicoco/.local/miniconda3/envs/ldm/bin/python",
    "/home/nicoco/tmp/stable-diffusion/optimizedSD/optimized_img2img.py",
    "--H",  # height and width must be multiples of 64
    "512",  # more pixels = more VRAM needed
    "--W",
    "512",
    "--n_iter",  # n_output_img = n_iter x n_samples
    "1",  # more n_samples = faster, but more VRAM
    "--n_samples",
    "4",
    "--ddim_steps",
    "50",
    "--turbo",  # remove these
    "--unet_bs",  # 3 last lines
    "4",  # for lower VRAM usage
]


async def worker(bot: "MUCBot"):
    # This will process requests one after the other, because
    # chances are you can only run one at a time with a
    # consumer-grade GPU.
    # Adapted from the python docs:
    # https://docs.python.org/3/library/asyncio-queue.html
    q = bot.queue
    while True:
        msg = await q.get()
        start = time.time()
        # this should be replaced with a proper regex…
        prompt = re.sub(f"^{bot.nick}(.)", "", msg["body"]).strip()
        # let's call an external process that spawns
        # another python interpreter. quick and dirty, remember?
        try:
            url = bot.images_waiting_for_prompts.pop(msg["from"])
        except KeyError:
            proc = await asyncio.create_subprocess_exec(
                *CMD_TXT2IMG,
                "--prompt",
                prompt,
                "--seed",
                str(random.randint(0, 1_000_000_000)),  # random=fun
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
                cwd=CWD,
            )
            stdout, stderr = await proc.communicate()
        else:
            with tempfile.NamedTemporaryFile() as f:
                async with aiohttp.ClientSession() as session:
                    async with session.get(url) as r:
                        f.write(await r.read())
                proc = await asyncio.create_subprocess_exec(
                    *CMD_IMG2IMG,
                    "--prompt",
                    prompt,
                    "--seed",
                    str(random.randint(0, 1_000_000_000)),  # random=fun
                    "--init-img",
                    f.name,
                    stdout=asyncio.subprocess.PIPE,
                    stderr=asyncio.subprocess.PIPE,
                    cwd=CWD,
                )
                stdout, stderr = await proc.communicate()
        print(stdout.decode())  # print, the best debugger ever™
        print(stderr.decode())
        # This retrieves the directory where the images were written
        # from the process's stdout. Yes, it is very ugly.
        output_dir = CWD / stdout.decode().split("\n")[-3].split()[-1]
        print(output_dir)
        q.task_done()
        bot.send_message(
            mto=msg["from"].bare,
            mbody=f"Result for: '{prompt}' (took {round(time.time() - start)} seconds)",
            mtype="groupchat",
        )
        for f in output_dir.glob("*"):
            if f.stat().st_mtime < start:
                continue  # only upload the latest generated images
            url = await bot["xep_0363"].upload_file(filename=f)
            reply = bot.make_message(
                mto=msg["from"].bare,
                mtype="groupchat",
            )
            # these lines are required to make the Conversations
            # Android XMPP client actually display the image in
            # the app, and not just a link
            reply["oob"]["url"] = url
            reply["body"] = url
            reply.send()


# This part is basically just the slixmpp example MUCbot with some
# minor changes
class MUCBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password, rooms: list[str], nick):
        slixmpp.ClientXMPP.__init__(self, jid, password)
        self.queue = asyncio.Queue()
        self.rooms = rooms
        self.nick = nick
        self.add_event_handler("session_start", self.start)
        self.add_event_handler("groupchat_message", self.muc_message)
        self.images_waiting_for_prompts = {}

    async def start(self, _event):
        await self.get_roster()
        self.send_presence()
        for r in self.rooms:
            await self.plugin["xep_0045"].join_muc(JID(r), self.nick)
        asyncio.create_task(worker(self))

    async def muc_message(self, msg):
        if msg["mucnick"] != self.nick:
            if msg["body"].lower().startswith(self.nick):
                await self.queue.put(msg)
                self.send_message(
                    mto=msg["from"].bare,
                    mbody=f"Roger that: '{msg['body']}' (queue: {self.queue.qsize()})",
                    mtype="groupchat",
                )
            elif url := msg["oob"]["url"]:
                self.send_message(
                    mto=msg["from"].bare,
                    mbody="OK, what should I do with this image?",
                    mtype="groupchat",
                )
                self.images_waiting_for_prompts[msg["from"]] = url


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument(
        "-q",
        "--quiet",
        help="set logging to ERROR",
        action="store_const",
        dest="loglevel",
        const=logging.ERROR,
        default=logging.INFO,
    )
    parser.add_argument(
        "-d",
        "--debug",
        help="set logging to DEBUG",
        action="store_const",
        dest="loglevel",
        const=logging.DEBUG,
        default=logging.INFO,
    )
    parser.add_argument("-j", "--jid", dest="jid", help="JID to use")
    parser.add_argument("-p", "--password", dest="password", help="password to use")
    parser.add_argument(
        "-r", "--rooms", dest="rooms", help="MUC rooms to join", nargs="*"
    )
    parser.add_argument("-n", "--nick", dest="nick", help="MUC nickname")
    args = parser.parse_args()
    logging.basicConfig(level=args.loglevel, format="%(levelname)-8s %(message)s")

    xmpp = MUCBot(args.jid, args.password, args.rooms, args.nick)
    xmpp.register_plugin("xep_0030")  # Service Discovery
    xmpp.register_plugin("xep_0045")  # Multi-User Chat
    xmpp.register_plugin("xep_0199")  # XMPP Ping
    xmpp.register_plugin("xep_0363")  # HTTP upload
    xmpp.register_plugin("xep_0066")  # Out of band data
    xmpp.connect()
    xmpp.process()
After installing slixmpp and aiohttp with your favorite python environment isolation tool, you can launch the bot with:
# -j: XMPP account of the bot, -p: its password,
# -r: MUC rooms to join, -n: nickname
python bot.py \
    -j bot@example.com \
    -p XXX \
    -r room1@conference.example.com room2@conference.example.com \
    -n t1000
The bot can join several rooms, so you can make one that you can show your mother, and another one for your degenerate friends.
None! I don't plan to make this a more sophisticated bot. But maybe I'll make it generate a little more than 2 images at once: since generating 2 images takes about 1 minute and generating 20 images takes about 3 minutes, there is probably a better middle ground. Especially because so many of the output images are just crap!
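Assuming the cost is roughly linear (a fixed model-startup cost plus a per-image cost), those two rough timings are enough to estimate where the time goes:

```python
# Fit t = startup + per_image * n to the two observed timings
# (2 images ≈ 60 s, 20 images ≈ 180 s).
n1, t1 = 2, 60
n2, t2 = 20, 180

per_image = (t2 - t1) / (n2 - n1)  # marginal cost of one more image
startup = t1 - per_image * n1      # fixed overhead of a run

print(round(per_image, 1), round(startup, 1))  # 6.7 46.7
```

So under this back-of-the-envelope model, most of a small run is startup overhead (~47 s vs ~7 s per image), which is why batching something like 8 or 10 images per prompt would amortize it much better than 2.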
But I should get back to paid work now.
EDIT (2022/09/01): added img2img early in the morning because it is even more fun, changed the number of output images to 4 and tuned a few parameters to take advantage of my beefy GPU. Now, no more!
Android Apps
Nicoco · Sunday, 17 July, 2022 - 00:00 · 5 minutes
If, like me, you still haven't switched to a "true" GNU/Linux phone such as the pinephone or the Librem 5, chances are you are running android, i.e., Google/Linux.
To extend your phone's lifetime and avoid feeding the data giants, I recommend using LineageOS, a custom ROM (i.e., an OS) without your phone manufacturer's and google's spying bloat.
If your phone is in this list, installation should be pretty straightforward. If you can't find your phone there, don't panic! You may find a version of LineageOS for your device anyway by searching the XDA-dev forum. It feels a bit weird to download a binary from an online forum, but this seems to be OK in the Android world. Anyway, I did it for my Xiaomi Redmi Note 4, which is not officially supported by LineageOS anymore, and I am very happy with it.
Now, even if you don't install a custom ROM, I recommend using the following apps:
This is where the organic apps are.
There is absolutely no valid reason not to install F-Droid on your phone. It is a repository of open source applications, i.e., an alternative to the play store, where (most) apps are user-focused instead of focused on collecting personal data. Just install this APK, and avoid using the google play store at all costs!
The app search engine included in F-Droid is notoriously bad, so here's my curated list of recommended apps. (Use a regular web search engine to find apps on F-Droid unless you already know the exact name of the app you are looking for.)
Main use of a phone, right? It used to be, at least.
The following apps require you to either set up your own jellyfin instance (a media server), or to have a nerdy pirate friend grant you access to theirs.
If you use Kodi, its remote control app can also be handy.
Pfffew that took longer to write than I thought!
Paraview Transform Matrix
Nicoco · Tuesday, 7 June, 2022 - 00:00 · 1 minute
Paraview is a great tool for visualising 3D datasets. It also offers a few features to transform the data through filters, among which the Transform filter, which lets you interactively apply rotations and translations.
If you want to apply the same transformation outside of paraview, it becomes a little tricky. While paraview gives you a nice GUI showing the rotation angles and the translation vector, it is unclear in which order the transformations are applied. Is the translation applied first? What is the order of the rotations?
It took me some time to figure it out, but the answer is: the rotations are applied in Z, X, Y order, and the translation is applied last.
I am sure there is a valid reason for this Z, X, Y order but boy did it take me some precious time to figure this out. Here's a little dirty python snippet that I am happy to share:
import numpy as np


def paraview_transform(points, translation, xrot, yrot, zrot):
    """
    points: array of shape (n_points, 3)
    translation: array of shape (3,) to copy from the paraview GUI
    xrot, yrot, zrot: floats to copy from the paraview GUI
    """
    new_points = transform_points(
        points,
        get_transform_matrix(
            translation=translation,
            rotation_x=xrot,
            rotation_y=yrot,
            rotation_z=zrot,
        ),
    )
    return new_points


def get_transform_matrix(
    translation=(0.0, 0.0, 0.0),
    rotation_x=0.0,
    rotation_y=0.0,
    rotation_z=0.0,
    radians=False,
    z_first=True,
):
    t = np.eye(4)
    t[:3, 3] = translation
    if not radians:
        rotation_x = np.deg2rad(rotation_x)
        rotation_y = np.deg2rad(rotation_y)
        rotation_z = np.deg2rad(rotation_z)
    x = np.eye(4)
    x[1, 1] = x[2, 2] = np.cos(rotation_x)
    x[2, 1] = np.sin(rotation_x)
    x[1, 2] = -np.sin(rotation_x)
    y = np.eye(4)
    y[0, 0] = y[2, 2] = np.cos(rotation_y)
    y[2, 0] = -np.sin(rotation_y)
    y[0, 2] = np.sin(rotation_y)
    z = np.eye(4)
    z[0, 0] = z[1, 1] = np.cos(rotation_z)
    z[1, 0] = np.sin(rotation_z)
    z[0, 1] = -np.sin(rotation_z)
    if z_first:
        res = t @ z @ x @ y
    else:
        res = t @ x @ y @ z
    return res.T


def transform_points(points, transformation_matrix):
    source = np.ones((len(points), 4))
    source[:, :3] = points
    new_points = source @ transformation_matrix
    return new_points[:, :3]
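As a quick, self-contained sanity check of the rotation conventions used in the snippet above: a 90° counter-clockwise rotation about Z (the same matrix layout as the `z` block, with sin on row 1 and -sin on row 0) should send the point (1, 0, 0) to (0, 1, 0):

```python
import numpy as np

# Build the same Z-rotation matrix as the snippet above (3x3 is enough
# here, since there is no translation) and check it on a unit vector.
a = np.deg2rad(90.0)
z = np.array([
    [np.cos(a), -np.sin(a), 0.0],
    [np.sin(a),  np.cos(a), 0.0],
    [0.0,        0.0,       1.0],
])
p = np.array([1.0, 0.0, 0.0])

print(np.allclose(z @ p, [0.0, 1.0, 0.0]))  # True
```

A small check like this is also a handy way to convince yourself of the Z, X, Y ordering: compose two rotations in each order and compare against what the paraview GUI shows for the same angles.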