
      Erlang Solutions: Introducing Wardley Mapping to Your Business Strategy / PlanetJabber · Yesterday - 10:52 · 5 minutes

    Since its creation in 2005, Wardley Mapping has been embraced by UK government institutions and companies worldwide. This is thanks to its unique ability to factor both value and change into the strategising process. It’s a powerful, fascinating tool that far more organisations across the world should be implementing today to make key choices for their future growth.
    Ahead of my wider Wardley Mapping strategy discussion at GOTO Copenhagen this year, I have put this article together as an overview of Wardley Mapping for business strategy, including the fundamentals of creating a map, how to interpret it to form patterns, and why it works so effectively for corporate decision-making today.

    Wardley Mapping: An Abbreviated Overview

    In this article, I will go through the core aspects of how a Wardley Map is created so that you can begin to consider its use in your organisation.

    These maps can look complicated at first, particularly maps with many connections, but to get started you need to understand:

    • Purpose
    • Anchors (Users)
    • Needs and Capabilities
    • The Value Chain (Vertical Axis)
    • Evolution (Horizontal Axis)


    Purpose

    To create a successful Wardley Map, you must first have a clearly defined purpose. This may be to solve a particular problem in your company, like a lack of sales, or to evaluate all of your solution features as a SaaS.

    Your purpose will define the overall scope of your map, and what you’re trying to achieve by creating it.

    The Anchor (User)

    Wardley Mapping is not a mindmap or a graph; it’s a map because it has an anchor. Simon Wardley, the inventor of Wardley Mapping, explains the difference between graphs and maps succinctly in this video.

    An anchor is what your strategy revolves around. If you’re Wardley Mapping to assess your market, for example, your main anchor would be your customers. Another simple example below shows a Wardley Map for an HR recruitment strategy, with a job candidate as the anchor.


    Source: HR Value

    You can find more information on this map and all the others I reference throughout this article in my curated collection of Wardley Mapping examples from a diverse set of business domains.

    Needs and Capabilities

    The needs on a Wardley Map are simply what each of your anchors needs to have or achieve within what you’re mapping. In the example above, the candidate needs a job.

    Then, you have to map your current capabilities, which are all the things your anchor has to have to support their needs. Once you’ve identified needs and capabilities, you can start to connect them.

    This example for the Neeva search engine below shows a single, clear need for their search engine customers – find “something” – followed by a large number of capabilities, as well as offered features and what their competition is doing.


    Value Chain (Vertical Axis)

    As you start to create your map, you’ll naturally be creating a value chain based on how visible (or invisible) each need and capability is to your anchor.

    A good example of this is this review of the video games industry below, where streaming, social media and store locations are more visible to the Player and Influencer anchors than publishers or development platforms.


    Evolution (Horizontal Axis)

    The ability to track evolution and change is what sets Wardley Mapping apart from other business strategising tools. When mapping needs and capabilities, you must map along the horizontal axis across four stages:

    1. Genesis – rare, poorly understood, uncertain.
    2. Custom-built – people are starting to understand and consume this capability.
    3. Product (+ Rental) – consumption is rapidly increasing.
    4. Commodity/Utility/Basic Service – this capability is normalised and widespread.
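
    To make the two axes concrete, a Wardley Map can be represented as plain data: each component gets a visibility coordinate (its place in the value chain) and an evolution stage. The minimal sketch below is in Python purely for illustration – all component names and coordinate values are invented, loosely following the recruitment example:

```python
# Illustrative sketch only: a Wardley Map as plain data.
# Component names and coordinate values are invented for the
# HR recruitment example (anchor = job candidate).

EVOLUTION = ["genesis", "custom-built", "product", "commodity"]

# (visibility, evolution stage); visibility 1.0 = closest to the anchor
components = {
    "job candidate": (1.0, None),           # the anchor itself
    "find a job":    (0.9, "product"),
    "job board":     (0.6, "commodity"),
    "ATS software":  (0.3, "product"),
    "hosting":       (0.1, "commodity"),
}

# Every non-anchor component sits somewhere on the evolution axis.
assert all(stage in EVOLUTION
           for _vis, stage in components.values() if stage is not None)

# The value chain orders components by how visible they are to the anchor.
value_chain = sorted(
    (name for name in components if name != "job candidate"),
    key=lambda name: components[name][0],
    reverse=True,
)
print(value_chain)  # most visible first
```

    Sorting by visibility reproduces the value chain; plotting visibility against evolution stage would reproduce the map itself.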

    You can better understand each of these four stages using this table, also found on this website and in Simon Wardley’s book on Wardley Mapping.

    The below map goes into great detail on the tech stack and capabilities of Airbnb and is a good example of how mapping across the value chain and evolution provides a unique visual only offered by Wardley Mapping.


    Strategising Through Wardley Mapping

    The above explains the fundamentals of designing a Wardley Map. There are also some helpful explainers on creating your first map on the Lean Wardley Mapping website.

    Once you know the basics of how to map, and you’ve practised enough to be confident in your results, you can implement more complicated strategies to use your map for key decision-making. Some approaches to this include:

    Understanding Patterns – how to spot the patterns between your capabilities.

    Climatic Patterns – these are the basic rules of the Wardley Map and how it evolves. You can use the table tracking these patterns to review potential changes to your strategy.

    Anticipating Changes – including more advanced connection-making like subgroups, inertia and the Red Queen effect.

    Doctrinal Principles – this is a table of universally useful patterns that you can apply to your Wardley Map to consider your current situation.

    Gameplay Patterns – this is a table of traditional business strategies you can use to influence your map and, by extension, your purpose or market.

    Fool’s Mate – a near-universal strategy for using your map to understand where to make changes through evolution.

    Why Wardley Mapping is Perfect for Business Strategy

    Wardley Mapping can be an invaluable tool in business planning and key decision-making thanks to the following factors.

    Topological Visualisation

    Wardley Mapping allows your business to visualise markets or scenarios topologically, meaning the position of each component carries meaning and the map accounts for things that change over time.

    In business, the one constant is that everything changes. Wardley Mapping can help you prepare for those changes, react ahead of time, and project key decisions for the future even before they’re necessary.

    Pattern Recognition

    The ability to view concepts in a changing environment also makes Wardley Mapping great for tracking patterns between capabilities or anchors.

    This allows you to consider how crucial decisions might impact other aspects of your business model, or identify which areas demand particular consideration.

    Uniting Senior Leaders

    Wardley Mapping allows all senior decision-makers to view the same data visualisation of a market or situation objectively.

    By then applying doctrines and patterns, management teams can discuss potential changes within a unifying environment, with a clear way to visualise and consider the impacts of these decisions through modifying the map.

    Wardley Mapping Strategy At BigCorp – GOTO Copenhagen 2023

    I’ve written this article as an introduction to my upcoming talk on October 3rd at GOTO Copenhagen 2023.

    From 09:40 – 10:30, I’ll be discussing how Wardley Mapping can be utilised in senior decision-making scenarios, as well as how I think it should be applied today.

    You can find out more about the talk, as well as the other speakers joining the conference this year, on GOTO Copenhagen’s website. I hope to see you there.

    The post Introducing Wardley Mapping to Your Business Strategy appeared first on Erlang Solutions.


      Erlang Solutions: Our experts at Code BEAM Europe 2023 / PlanetJabber · 4 days ago - 16:29 · 2 minutes

    The biggest Erlang and Elixir Conference is coming to Berlin in October!

    Are you ready for a deep dive into the world of Erlang and Elixir? Mark your calendars, because Code BEAM Europe 2023 is just around the corner.

    With a lineup of industry pioneers and thought leaders, Code BEAM Europe 2023 promises to be a hub of knowledge sharing, innovation, and networking.

    Erlang Solutions’ experts are working hard to create presentations and training that explore the latest trends and practical applications.

    Here’s a sneak peek into what our speakers have prepared for this event:

    Natalia Chechina - speaker at Code BEAM Europe 2023

    Natalia Chechina:
    Observability at Scale

    In her talk, Natalia will share her experience and the rules of thumb for working with metrics and logs at scale. She will also cover the theory behind these concepts.



    Nelson Vides and Paweł Chrząszcz (Code BEAM Europe 2023)


    Nelson Vides & Paweł Chrząszcz:
    Reimplementing technical debt with state machines

    A set of ideas on how to implement a core protocol that is meant to be extensible, and keep it stable: this talk is a must-attend for all legacy-code firefighters.



    Brian Underwood
    Do You Really Need Processes?

    A demo of a ride-sharing application that Brian created to explore what is possible with a standard Phoenix + PostgreSQL application.


    Visit Our Stand: Meet experts, win prizes

    Drop by our exhibition stand at the conference venue and meet the speakers! This is a fantastic opportunity to interact one-on-one with our experts, ask questions, and gain deeper insights into their presentations.

    Win a free Erlang Security Audit for your business

    We’re showcasing our brand new Erlang Security Audit, for companies whose code is rooted in Erlang and who want to safeguard their systems from vulnerabilities and security threats. Pop by to find out more!

    Looking for a messaging solution?

    Our super-experienced team will be on hand to understand your messaging needs and demo our expert MongooseIM capabilities.  If you’re an existing MongooseIM user, you could also win a specialist Health Check to ensure the optimum and smooth operation of your MongooseIM cluster. So come by and say hello.

    Book your ticket and see you in Berlin 19-20 Oct! If you can’t make it in person, you can always join us online.

    Upskill your team

    Make sure to also check out the training offer. The day before Code BEAM Europe, you have the opportunity to join tutorials with experts, including those from Erlang Solutions: Robert Virding, Francesco Cesarini, Natalia Chechina and Łukasz Pauszek.

    Check the training programme here >>

    The post Our experts at Code BEAM Europe 2023 appeared first on Erlang Solutions.


      Erlang Solutions: Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring / PlanetJabber · Wednesday, 20 September - 14:56 · 8 minutes

    For our first article on IoT developments at Erlang Solutions, our goal is to delve into the use of Erlang on microcontrollers, highlighting its capability to run efficiently on smaller devices. For this inaugural piece, we have chosen to address a pressing issue faced by numerous sectors, including the healthcare, real estate management, travel, entertainment and hospitality industries: air quality monitoring. The range of measurements that can be collected is vast and varies from context to context, so we decided to use just one example of the information that can be collected as a conversation starter.

    We will guide you through the challenges and demonstrate how Erlang/Elixir can be utilised to measure, analyse, make smart decisions, respond accordingly and evaluate the results.

    Air quality is assessed by reading a range of different metrics. Carbon dioxide (CO₂) concentration, particulate matter (PM), nitrogen dioxide (NO₂), ozone (O₃), carbon monoxide (CO) and sulfur dioxide (SO₂) are usually taken into account. Volatile organic compounds (VOCs) are another commonly measured group. Some, but not all, VOCs are human-made, produced by different processes, whether through urbanisation, manufacturing or the production of other goods and services.

    We are measuring CO₂ in this prototype as an example for gathering environmental readings. CO₂ is a greenhouse gas naturally present in the atmosphere and its levels are influenced by many factors, including human activities such as burning fossil fuels.

    The specific technical challenge for this prototype was to run our application in very small and power-constrained scenarios. We chose to address this by trying out AtomVM as our alternative to the BEAM.

    AtomVM is a new, lightweight implementation of the BEAM virtual machine that is designed to run as a standalone Unix binary or can be embedded in microcontrollers such as STM32, ESP32 and RP2040.

    Unlike a single-board computer, which is designed to run a general-purpose operating system, a microcontroller runs application-specific firmware, often with very low power consumption and at lower cost, making it ideal for operating IoT devices.

    Our device is composed of an ESP32 microcontroller, a BME280 sensor to measure pressure, temperature and relative humidity, and an SCD40 sensor to measure CO₂ concentration in PPM (parts per million).

    The ESP32 that we are going to use in this article is an ESP32-C3, which is a low-cost single-core RISC-V microcontroller, obtainable from authorised distributors worldwide. The SCD40 sensor is made by Sensirion and the BME280 by Bosch Sensortec. There might be cheaper manufacturers for similar sensors, so feel free to choose according to your needs.

    Let’s get going!

    Getting dependencies ready

    For starters, we will need to have AtomVM installed; just follow the instructions on their website.

    It is important to follow the instructions, as this guarantees that you will have a working Espressif ESP-IDF installation and are able to flash the ESP32 microcontroller via the USB port using the esptool utility provided with the ESP-IDF tool suite.
    You will also need to have rebar3 installed as we are going to use it to manage the development cycle of the project.

    Bootstrapping our application

    First, we will need to create our application in order to start wiring things up on the software side. Use rebar3 for creating the application layout:

    % rebar3 new app name=co2
    ===> Writing co2/src/co2_app.erl
    ===> Writing co2/src/co2_sup.erl
    ===> Writing co2/src/co2.app.src
    ===> Writing co2/rebar.config
    ===> Writing co2/.gitignore
    ===> Writing co2/LICENSE
    ===> Writing co2/README.md

    Make sure to include the rebar3 plugins and dependencies before compiling the scaffold project by adding the following to your rebar.config file:

    {deps, [
        {atomvm_lib, {git, "", {branch, "master"}}}
    ]}.

    {plugins, [
        {atomvm_rebar3_plugin, {git, "", {branch, "master"}}}
    ]}.

    Recent atomvm_lib development updates have not yet been published to the package repository, so we use the master branch, which has some fixes we need. This dependency also includes the BME280 driver that we are going to use.

    While we can boot the application now on our machine, we also need to implement an extra function that AtomVM will use as an entrypoint. The OTP entrypoint is defined in the application resource file as {mod, {co2_app, []}}, which names the default module to use for starting an application. However, AtomVM instead expects a start/0 function exported from a module; it does not start applications the way standard OTP does. Therefore, some glue must be used:

    start() ->
        {ok, I2CBus} = i2c_bus:start(#{sda => 6, scl => 7}), %% I2C pins for the Xiao ESP32-C3
        {ok, SCD} = scd40:start_link(I2CBus, [{is_active, true}]),
        {ok, BME} = bme280:start(I2CBus, [{address, 16#77}]),
        loop(#{scd => SCD, bme => BME}).

    loop(#{scd := SCD, bme := BME} = State) ->
        {ok, {CO2, Temp, Hum}} = scd40:take_reading(SCD),
        {ok, {Temp1, Press, Hum1}} = bme280:take_reading(BME),
        io:format(
            "[SCD] CO2: ~p PPM, Temperature: ~p C, Humidity: ~p%RH~n",
            [CO2, Temp, Hum]),
        io:format(
            "[BME] Pressure: ~p hPa, Temperature: ~p C, Humidity: ~p%RH~n",
            [Press, Temp1, Hum1]),
        loop(State).

    This module will start the main loop that reads from the sensors and displays the readings over the serial connection.

    We are using the stock BME280 driver that comes bundled with the atomvm_lib dependency, meaning that we only needed to change the address at which the BME280 sensor answers on the I2C bus.
    For the SCD40 sensor, we need to write some code. According to the SCD40 datasheet, in order to submit commands to the sensor, we need to wrap them with START and STOP conditions, signalling the transmission sequence. The sensor provides a range of features and functionality, but we are only concerned with starting periodic measurements and reading those values from the sensor’s memory buffer.

    %% 3.5.1 start_periodic_measurement
    do_start_periodic_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
        batch_writes(I2CBus, Address, ?SCD4x_CMD_START_PERIODIC_MEASUREMENT).

    batch_writes(I2CBus, Address, Register) ->
        Writes = [
            fun(I2C, _Addr) -> i2c:write_byte(I2C, Register bsr 8) end,      %% MSB
            fun(I2C, _Addr) -> i2c:write_byte(I2C, Register band 16#FF) end  %% LSB
        ],
        i2c_bus:enqueue(I2CBus, Address, Writes).
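
    Each SCD4x command is a single 16-bit word that travels over the I2C bus as two bytes, most significant byte first – which is exactly what the `bsr 8` / `band 16#FF` pair above computes. A quick sketch of the same split, in Python purely for illustration (the command code 16#21B1 for start_periodic_measurement comes from the SCD4x datasheet, but treat the exact value as an assumption here):

```python
# Split a 16-bit SCD4x command word into the two bytes written to the I2C bus.
CMD_START_PERIODIC_MEASUREMENT = 0x21B1  # per the SCD4x datasheet (assumed)

def command_bytes(cmd: int) -> tuple:
    msb = (cmd >> 8) & 0xFF   # Erlang: Register bsr 8
    lsb = cmd & 0xFF          # Erlang: Register band 16#FF
    return msb, lsb

print(command_bytes(CMD_START_PERIODIC_MEASUREMENT))  # (33, 177), i.e. 0x21, 0xB1
```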

    Once the SCD40 starts measuring the environment periodically we can read from the sensor every time a new reading is stored in memory:

    %% 3.5.2 read_measurement
    read_measurement(#state{i2c_bus = I2CBus, address = Address}) ->
        write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT bsr 8),
        write_byte(I2CBus, Address, ?SCD4x_CMD_READ_MEASUREMENT band 16#FF),
        case read_bytes(I2CBus, Address, 9) of
            {ok, <<C:2/bytes-little, _CCRC:1/bytes-little,
                   T:2/bytes-little, _TCRC:1/bytes-little,
                   H:2/bytes-little, _HCRC:1/bytes-little>>} ->
                %% 2 bytes (MSB first) each for CO2, temperature and humidity,
                %% each followed by a CRC byte
                <<C1, C2>> = C,
                <<T1, T2>> = T,
                <<H1, H2>> = H,
                {ok, {(C1 bsl 8) bor C2,
                      -45 + 175 * (((T1 bsl 8) bor T2) / math:pow(2, 16)),
                      100 * (((H1 bsl 8) bor H2) / math:pow(2, 16))}};
            {error, _Reason} = Err ->
                Err
        end.
    According to the datasheet, the response we read back is 9 bytes that we need to unpack and convert. Each value is followed by an 8-bit CRC checksum that we don’t take into account, though it would be useful for validating the sensor’s response. All the conversions above follow the official datasheet’s basic command specifications.
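
    The unpacking and conversion logic can be checked without any hardware. The sketch below, in Python for illustration only, parses a 9-byte frame the same way as the Erlang code above, and also validates the per-word CRC that we skipped: Sensirion datasheets specify CRC-8 with polynomial 0x31 and initial value 0xFF, with 0xBE 0xEF → 0x92 given as the check example.

```python
# Parse a 9-byte SCD4x measurement frame: three 16-bit words (CO2 raw,
# temperature raw, humidity raw), MSB first, each followed by a CRC-8 byte.

def crc8(data: bytes) -> int:
    """Sensirion CRC-8: polynomial 0x31, init 0xFF, no reflection or final XOR."""
    crc = 0xFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x31) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def parse_frame(frame: bytes):
    assert len(frame) == 9
    words = []
    for i in range(0, 9, 3):
        hi, lo, crc = frame[i], frame[i + 1], frame[i + 2]
        if crc8(bytes([hi, lo])) != crc:
            raise ValueError("CRC mismatch at offset %d" % i)
        words.append((hi << 8) | lo)
    co2_raw, t_raw, rh_raw = words
    return (co2_raw,                          # CO2 is already in ppm
            -45 + 175 * (t_raw / 2 ** 16),    # datasheet temperature conversion
            100 * (rh_raw / 2 ** 16))         # datasheet humidity conversion

# The datasheet's CRC example: crc8 over bytes 0xBE 0xEF yields 0x92.
assert crc8(b"\xbe\xef") == 0x92
```

    In the Erlang driver, the same check could be applied to the _CCRC, _TCRC and _HCRC bytes before accepting a reading.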

    Flashing our application

    In order to get our application packed in AVM format and flashed to the ESP32, we will add a rebar3 plugin that handles all those steps for us. It is possible to perform these steps manually, but that can become tedious and error-prone. By using rebar3 again, we gain a more streamlined development process.

    Add the following to your rebar.config:

    {plugins, [
        {atomvm_rebar3_plugin, {git, "", {branch, "master"}}}
    ]}.

    This provides a few commands that we will use, mainly `esp32_flash` and `packbeam`. Due to the way the plugin is implemented, calling `esp32_flash` will fetch the project dependencies, compile the application and pack its BEAM files in an AVM file designed to be flashed to our device:

    % rebar3 esp32_flash --port /dev/tty.usbmodem2101

    Note: You must use a port that matches your own.

    Obtaining readings

    If everything goes according to plan we should be able to connect to our device and see the output of the readings over the serial port. But first, we need to issue the following command within the AtomVM/src/platforms/esp32 directory:

    % ESPPORT=/dev/tty.usbmodem2101 -b 115200 monitor

    Note: You must use a port that matches your own.

    The output should match something along these lines:

    [SCD] CO2: 848 PPM, Temperature: 2.91405487060546875000e+01 C, Humidity: 3.74832153320312500000e+01%RH
    [BME] Pressure: 7.46429999999999949978e+02 hPa, Temperature: 2.89406427826446881000e+01 C, Humidity: 3.84759374625173900000e+01%RH

    Closing remarks

    We have explored writing an Erlang application that can run on an ESP32 microcontroller using AtomVM, an alternate implementation of the BEAM. We also managed to read environmental metrics of our interest, such as temperature, humidity and CO2 for further processing.

    Our highlights include the ease of manipulating binary data using pattern matching, and overall developer happiness.

    The ability to run Erlang applications on microcontrollers opens up a wide range of possibilities for IoT development. Erlang is a well-known language for building reliable and scalable applications, and its use on microcontrollers can help to ensure that these applications are able to handle the demands that IoT requires.


    The post Smart Sensors with Erlang and AtomVM: Smart cities, smart houses and manufacturing monitoring appeared first on Erlang Solutions.


      Snikket: State of Snikket 2023: Funding / PlanetJabber · Monday, 18 September - 13:20 · 7 minutes

    As promised in our ‘State of Snikket 2023’ overview post, and teased at the end of our first update post about app development, this post in the series is about that thing most of us open-source folk love to hate… money.

    We are an open-source project, and not-for-profit. Making money is not our primary goal, but like any business we have upstream expenses to pay – compensation for the time and specialist work we need to implement the Snikket vision. To do that, we need income.

    This post will cover where our funding has come from over the last couple of years and where we’ve been spending it. We’ll also talk a bit about where we anticipate finding funding over the next year or so, and what some of that is budgeted for.

    Our last post on this topic was two years ago, when we announced the Open Technology Fund grant that allowed SuperBloom (then known as Simply Secure) to work on the UI/UX of the Snikket apps. Since then, other pieces of Snikket-related work have been supported by two more grants – both from projects managing funds dedicated to open source and open standards by the EU’s NGI (Next Generation Internet) initiative.

    The first one was a project called DAPSI (Data Portability and Services Incubator), focused on enabling people to move their data more easily between different online services. DAPSI funded Snikket directly to support Matthew’s work on account portability standards, which can be used not only in the software projects underlying Snikket itself, but in any and all XMPP software. This one helped keep Matthew fed for much of 2021, and as we described on our blog after the funding was confirmed, it kept him busy with:

    • Standardizing the necessary protocols and formats for account data import and export
    • Developing open-source, easy-to-use tools that allow people to export, import and migrate their accounts between XMPP services
    • Building this functionality into Snikket

    The other grant was from the NGI Assure Fund, administered by NLnet. It was one Matthew applied for on behalf of the Prosody project, and helped keep him busy and fed through the second half of 2022 and into 2023. Prosody is the XMPP server project that the Snikket server software is built on, so any improvements there flow fairly directly to people using Snikket.

    NGI Assure is focused on improving the security of people’s online accounts, and their grant to Prosody was for work on bringing new security features like multi-factor authentication to XMPP accounts. The work included in the scope of the grant is now complete, and some of it is already available to be used. The rest will be boxed up over the coming months and released, to start finding its way into XMPP software.

    Both of these successful grant applications are practical examples of the Snikket company serving as a way to fund important work on the software and standards that the Snikket software and services depend on. Work that can be hard to fund any other way. However, grants like these usually cover a medium-to-long-term piece of work with a very specific scope, which can divert time away from other parts of the project. It is hard to find grants with a focus on general improvements, bug fixing and maintenance. This is the main reason why there hasn’t been as much work on the app side of things, nor updates on this blog.

    We very much appreciate the grants we’ve received from all these funders, and the important features they have enabled us to implement. But ultimately we see “side income” like grants as a short-term way to plug the holes in our financial bucket while we’re still getting up and running. Our long-term goal, as a social enterprise (specifically a UK-based Community Interest Company), has always been to earn the income we need through donations and by providing commercial services to the community using Snikket software.

    When Snikket began, the main plan for this was to set up a hosting service, where people can pay a regular subscription to have us look after their Snikket server (more on this below). But over the last year or so we’ve discovered that there’s a lot to be gained from partnering with other social enterprises with shared values and related goals.

    One such company is JMP, an innovative telephony company who provide phone numbers that can be used with XMPP apps, for both text messages and calls. They celebrated their official public launch a few months ago.

    We’re very grateful to JMP for funding the other half of Matthew’s work hours while he was beavering away on the NGI Assure grant work. Why were they willing to do that? To answer that, we need to tell you a bit more about what they do.

    During the six years their service has been in beta testing, JMP’s first priority has been developing software gateways to allow XMPP apps to communicate with mobile phone networks, and vice-versa. However, many of their customers are newcomers to the world of XMPP. They would often struggle to find suitable apps with the required features for their platform, and struggle to find good servers on which they can register their XMPP accounts.

    What could be a better solution to this problem than a project that aims to produce a set of easy-to-use XMPP-compliant apps with a consistent set of features across multiple platforms? Yes - Snikket complements their service wonderfully!

    So we have been collaborating a lot with JMP (or, more generally, with their umbrella project for all their open-source projects, including JMP). On the app development side, we share code between Snikket Android and their Cheogram Android app (both are based on, and contribute back to, Conversations). We have also worked to ensure that iOS is not left behind, integrating features such as an in-call dial pad into Snikket iOS as well.

    If JMP customers don’t already have access to a hosted XMPP server and neither the time or skills to run their own, they need one of those too. So JMP have been suggesting Snikket’s hosting service to customers who don’t have an XMPP account yet. With all the necessary features for a smooth experience, easy setup and hosting available, Snikket ticks all the boxes. In fact the latest version of Cheogram allows you to launch your own Snikket instance directly within the app!

    A lot of work has been put into ensuring the hosting service is easy, scalable and reliable - to be ready for JMP’s launch traffic and also well into the future.

    But while JMP is an excellent partner, Snikket isn’t only about JMP. We’re preparing for our own service to also exit beta before the end of this year. Once we do, revenue from the service will help us cover the costs of continuing to grow and advance all of our goals. Pricing has not been set yet, but we’re aiming for a balance between sustainable and affordable.

    JMP will continue to sponsor half of Matthew’s time on the project. The other half is covered by our other supporters. You know who you are and we’re very grateful for your support.

    The income sources we’ve talked about so far pay for Matthew’s time to work on Snikket and related projects. We also appreciate the donations a number of people have made to the project via LiberaPay and GitHub sponsorships. These help us pay for incidental expenses like:

    • Project infrastructure, including this website, domain names, and push notification services and monitoring.

    • Development costs, like paying for an Apple developer account.

    • Travel costs of getting to conferences for presentations.

    One other important thing these donations help to pay for is test devices.

    We buy, or are donated, second-hand devices for developing and testing the Snikket apps. Used devices are much cheaper, so we can get more test devices for the same budget. Also, most people don’t get a brand new device every year, so these slightly older devices are more likely to match what the average person is using.

    Finally, we consider the environmental benefit. Using older but functional devices gives them a second life, preventing them from being needlessly scrapped, and keeping them out of the growing e-waste piles our societies now produce.

    So that’s everything there is to share on the topic of Snikket’s finances for now. But we’re not done with our ‘State of Snikket 2023’ updates, oh no.

    As we mentioned at the end of the last piece in this series, there’s at least one more coming, about new regulations for digital technology and online services. A number of governments around the world are passing or proposing laws that could affect Snikket - some of them a bit concerning - and we have a few things to say about them.

    We’re also going to sneak in a review of the inaugural FOSSY conference Matthew presented at recently.

    Watch this space!


      JMP: Newsletter: Summer in Review / PlanetJabber · Wednesday, 13 September - 20:19 · 2 minutes

    Hi everyone!

    Welcome to the latest edition of your pseudo-monthly JMP update!

    In case it’s been a while since you checked out JMP, here’s a refresher: JMP lets you send and receive text and picture messages (and calls) through a real phone number right from your computer, tablet, phone, or anything else that has a Jabber client.  Among other things, JMP has these features: Your phone number on every device; Multiple phone numbers, one app; Free as in Freedom; Share one number with multiple people.

    Since our launch at the beginning of the summer, we’ve kept busy.  We saw some of you at the first FOSSY , which took place in July.  For those of you who missed it, the videos are out now .

    Automatic refill for users of the data plan is in testing now.  That should be fully automated a bit later this month and will pave the way for the end of the waiting list, at least for existing JMP customers.

    This summer also saw the addition of two new team members: welcome to Gnafu the Great who will be helping out with support, and Amolith , who will be helping out on the technical side.

    There have also been several releases of the Cheogram Android app (latest is 2.12.8-2) with new features including:

    • Support for animated avatars
    • Show “hats” in the list of channel participants
    • An option to show related channels from the channel details area
    • Emoji and sticker autocomplete by typing ‘:’ (allows sending custom emoji)
    • Tweaks to thread UI, including no more auto-follow by default in channels
    • Optionally allow notifications for replies to your messages in channels
    • Allow selecting text and quoting the selection
    • Allow requesting voice when you are muted in a channel
    • Send link previews
    • Support for SVG images, avatars, etc.
    • Long press send button for media options
    • WebXDC importFiles and sendToChat support, allowing, for example, import and export of calendars from the calendar app
    • Fix Command UI in tablet mode
    • Manage permissions for channel participants with a dialog instead of a submenu
    • Ask if you want to moderate all recent messages by a user when banning them from a channel
    • Show a long streak of moderated messages as just one indicator

    To learn what’s happening with JMP between newsletters, here are some ways you can find out:

    Thanks for reading and have a wonderful rest of your week!

      This post is public /b/septermber-newsletter-2023

      Erlang Solutions: Diversity & Inclusion at CodeBEAM Europe / PlanetJabber · Wednesday, 13 September - 13:31 · 1 minute

    Our Pledge to Diversity

    As technology becomes increasingly integrated into our lives, it’s crucial that the minds behind it come from diverse backgrounds. Different viewpoints lead to more comprehensive solutions, ensuring that the tech we create addresses the needs of a global audience.

    At Erlang Solutions, we believe that a diverse workforce is a catalyst for creativity and progress. By sponsoring the Diversity & Inclusion Programme for Code BEAM Europe 2023, we’re reinforcing our commitment to creating a tech landscape that is reflective of the world we live in.

    This initiative is not just about breaking down barriers; it’s about opening doors to new perspectives, ideas, and endless possibilities.

    At Erlang Solutions, we believe that diversity isn’t just a buzzword – it’s a fundamental pillar of progress. Our sponsorship of the Diversity & Inclusion Programme at Code BEAM aligns perfectly with our values. We’re excited to be part of an event that encourages open dialogue, showcases diverse talent, and paves the way for a more inclusive tech industry. – Jo Galt, Talent Manager.

    The goal of the programme is to increase the diversity of attendees and offer support to groups underrepresented in the tech community who would not otherwise be able to attend the conference.

    The Diversity & Inclusion Programme focuses primarily on empowering women, ethnic minorities and people with disabilities, among others, but everybody is welcome to apply.

    The post Diversity & Inclusion at CodeBEAM Europe appeared first on Erlang Solutions .

      Erlang Solutions: Protected: Navigating the Unconventional: Introducing Erlang and Elixir / PlanetJabber · Thursday, 7 September - 16:08

    This content is password protected. To view it please enter your password below:


    The post Protected: Navigating the Unconventional: Introducing Erlang and Elixir appeared first on Erlang Solutions .

      This post is public /blog/navigating-the-unconventional-introducing-erlang-and-elixir/

      Erlang Solutions: Pay down technical debt to modernise your technology estate / PlanetJabber · Thursday, 7 September - 16:07 · 5 minutes

    Imagine this scenario. Your CEO tells you the organisation needs a complete tech overhaul, then gives you a blank cheque and free rein. He tells you to sweep away the old and usher in the new. “No shortcuts, no compromise!” he cries. “Start from scratch and make it perfect!”

    And then you wake up. As we all know, this scenario is pure fantasy. Instead, IT leaders are faced with a constant struggle to keep up with the needs of the business, using limited resources, time-saving shortcuts and legacy systems.

    That’s the normal way of things and it can be a recipe for serious technical debt.

    What is technical debt?

    You’ve probably heard the term “technical debt”, even if there’s no agreement on what exactly it means.

    At its simplest, technical debt is the price you pay for an unplanned, non-optimised IT stack. There are many reasons for technical debt but in every definition, the debt tends to build over time and become more complex to solve.

    The phrase is intentionally analogous to financial debt. When you buy a house, you take out debt in the form of a mortgage. You get instant access to the thing you need – a home – but there are consequences down the line in the form of interest payments.

    As the metaphor suggests, technical debt is not always bad, just as financial debt is not always bad. It can be useful to do things quickly, as long as you’re prepared to tackle the consequences when they inevitably emerge. Unfortunately, lots of organisations take on the debt without thinking about the challenges to come.

    The challenges of technical debt

    How does technical debt come about? The simple answer is, in the normal cut and thrust of running a busy organisation.

    Source: McKinsey & Company

    • You create a temporary fix to a software problem and then don’t have time to design a better one. Over time, the temporary fix becomes a permanent part of your solution. The sticking plaster becomes the cure.
    • Your development teams are put under pressure to get something to market in super-quick time, to grasp a time-sensitive opportunity. They get it done – brilliantly – by making something that works well for the task in hand, but time-saving shortcuts slow up other systems in the longer term.
    • Finance refuses to replace legacy systems that just about work, even if they can’t offer the flexibility or speed that a modern digital-first organisation needs.

    In each case, complexity builds. One quick fix after another undermines the efficiency of your wider technology stack. Solutions work in isolation when they need to work together. Systems creak under the pressure of outdated or overly complex code.

    What level of debt does this ad hoc activity accumulate? According to research by the consultancy McKinsey, technical debt amounts to between 20% and 40% of the value of an organisation’s entire technology stack. The study also found that companies with the lowest technical debt performed better.

    Source: Tech Target

    How to manage technical debt

    So what can you do about it? In short, the way to manage technical debt is to pay it off. Not necessarily all of it, because some debt is OK. But it should be nearer 5% than 40%.

    The first thing is to understand what your technical debt is, and what’s causing it. Some of this you may instinctively know, such as the slowdown in productivity caused by legacy infrastructure.

    Elsewhere, it can be relatively straightforward to identify the symptoms of an overly complex or outdated system:

    • When an engineer is assigned a support ticket, how long does it take to complete the task? Are average completion times increasing? If so, you’re accumulating technical debt.
    • Are you having to fix your fixes? Maybe an application requires reworking one week and then again a couple of weeks later. In this case, debt is building up.
    • If you often have to develop applications and solutions quickly, or patch legacy software to keep it running, you’re also likely to be accumulating technical debt.
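    The ticket-time symptom above lends itself to a quick sanity check. As a purely illustrative sketch (the figures and the `completion_trend` helper are invented for this example, not anything measured for the article), you could compare recent average completion times against an earlier baseline:

```python
from statistics import mean

def completion_trend(durations_by_month):
    """Compare the recent half of per-month average ticket completion
    times (oldest first) against the earlier half.

    Returns recent/baseline; a ratio well above 1.0 suggests tasks are
    taking longer and technical debt may be accumulating.
    """
    mid = len(durations_by_month) // 2
    baseline = mean(durations_by_month[:mid])
    recent = mean(durations_by_month[mid:])
    return recent / baseline

# Made-up monthly averages, in hours:
months = [4.0, 4.2, 4.1, 5.0, 5.8, 6.3]
print(round(completion_trend(months), 2))  # 1.39
```

    A single ratio proves nothing on its own, but tracked over time it gives exactly the trend signal the bullet points describe.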

    In business-critical applications, it’s worth analysing the quality of the underlying code. It might have been fine five years ago, but half a decade can be a long time in technology. Writing modern code in a more efficient language will likely create significant efficiencies.

    Don’t try to do everything

    Identifying and reducing technical debt is a resource-intensive task. But it can be made manageable if you focus on evolution rather than revolution.

    McKinsey cites the example of a company that identified technical debt across 50 legacy applications but found that most of that debt was driven by fewer than 20. Every business is likely to have core assets that create most of its technical debt. They’re the ones to focus on.

    When you have identified the most debt-laden solutions, put the support, funding and governance in place to pay down the debt. Create meaningful KPIs and keep to them. Think about how to avoid debt accumulating again after modernisation projects are completed.

    Elixir and technical debt: commit to modern coding

    At the core of that future-proofing effort is code. Legacy coding techniques and languages create clunky, inefficient applications that will soon load your organisation with technical debt.

    One way to avoid that is through the use of Elixir, a simple, lightweight programming language that is built on top of the Erlang virtual machine.

    Elixir helps you avoid technical debt by creating uncomplicated, effective code. The simpler the code, the less likely it is to go wrong, and the easier it is to identify faults if it does.

    In addition, Elixir-based applications tend to run at optimal performance for their hardware environment, making the best use of your technology stack without incurring technical debt.

    In short, Elixir is a modern language that is designed for modern technology estates. It reduces technical debt through simplicity, efficiency and easy optimisation.

    Want to know more about efficient, effective development with Elixir, and how it can reduce your technical debt? Why not drop us a line?

    The post Pay down technical debt to modernise your technology estate appeared first on Erlang Solutions .

      This post is public /blog/managing-technical-debt-organization-challenges-and-solutions/

      Erlang Solutions: What businesses should consider when adopting AI and machine learning / PlanetJabber · Thursday, 31 August - 09:29 · 5 minutes

    AI is everywhere. The chatter about chatbots has crossed from the technology press to the front pages of national newspapers. Worried workers in a wide range of industries are asking if AI will take their jobs.

    Away from the headlines, organisations of all sizes are getting on with the task of working out what AI can do for them. It will almost certainly do something. One survey puts AI’s potential boost to the global economy at an eye-watering US$15.7tr by 2030.

    Those major gains will come from productivity enhancements, better data-driven decision making and enhanced product development, among other AI benefits.

    It’s clear from all this that most businesses can’t afford to ignore the AI revolution. It has the very real potential to cut costs and create better customer experiences.

    Quite simply, it can make businesses better. If you’re not at least thinking about AI right now, you should be aware that your competitors probably are.

    So the question is, what AI tools and processes are the right ones for you, and how do you implement groundbreaking technology that actually works, without disrupting day-to-day workflows? Here are a few things to consider.

    AI or machine learning?

    The first thing is to be clear about what you mean by AI.

    AI and machine learning

    AI is an umbrella term for technologies that attempt to mimic human intelligence. Perhaps the most important, at least at the moment, is machine learning (ML).

    ML tools analyse existing data to create business-enhancing insight. The more data they’re exposed to, the more they ‘learn’ what to look out for. They find patterns and trends in mountains of information and do so at speed.

    As far as business is concerned, those patterns might pinpoint unusual sales trends, potential production bottlenecks, or hidden productivity issues. They might reveal a hundred other potential opportunities and challenges. The important thing to remember is that ML of this kind automates the ability to learn from what has gone before.
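    To make the idea of automated pattern-spotting concrete, here is a deliberately tiny sketch (the sales figures and the `unusual_months` helper are invented for illustration; real ML tools do this at vastly greater scale and sophistication):

```python
from statistics import mean, stdev

def unusual_months(sales, threshold=2.0):
    """Flag months whose sales deviate from the mean by more than
    `threshold` standard deviations -- a toy stand-in for the kind of
    anomaly an ML tool would surface automatically."""
    mu, sigma = mean(sales), stdev(sales)
    return [i for i, s in enumerate(sales) if abs(s - mu) > threshold * sigma]

# Made-up monthly sales figures; month 5 is the outlier:
sales = [100, 104, 98, 101, 97, 180, 102, 99]
print(unusual_months(sales))  # [5]
```

    An ML system differs from this sketch mainly in learning what “unusual” means from the data itself, rather than relying on a hand-picked threshold.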

    ChatGPT and similar technologies, meanwhile, are part of a class of tools called generative AI. These applications also use machine learning techniques to mine huge datasets but do so for fundamentally different reasons.

    If ML looks back at existing materials, generative AI looks forward and creates new ones. One obvious role is in creating content, but generative tools can also produce code, business simulations and product designs.

    These two AI technologies can work together. For example, they might be tasked with automating the production of reports based on detailed analysis of a previous year’s data.

    Work out your goals for AI and machine learning

    Once you know a little about different AI tools, the next step is to understand what they can do for you. Begin with a business goal, not a technology.

    Start by identifying problems you need to solve, or opportunities you want to grasp. Maybe you want to create data-driven marketing campaigns for different customer segments. Maybe your website is crying out for a series of basic ‘how-to’ animations. Maybe you have processes that are ripe for automation.

    Whatever it is, the key is to identify the challenges you face and the opportunities you can take advantage of, and then mould an AI strategy that meets real business goals.

    Become a data-centric organisation

    As we’ve seen, AI and ML are dependent on data, and lots of it. The more data they can use, the more accurate and useful they tend to be.

    But data can be problematic. It is diverse, fragmented and often unstructured. It needs to be stored and moved securely and in line with relevant privacy regulations. All of this means that to make it valuable, you need to create a data management strategy.

    That strategy needs to address challenges related to data sourcing, storage, quality, governance, integration, analysis and culture.

    Corporate data is typically spread across an organisation and often found squirrelled away in the silos of legacy technology systems. It needs to be pooled, formatted and made accessible to the AI and ML tools of different departments and business units. Data is only useful when it’s available.

    AI and machine learning: Start small and keep it simple

    All of this makes implementing AI and ML sound like a highly time-consuming and complex undertaking. But it needn’t be, and especially not at first.

    The secret to most successful technology implementations is to start small and simple. That’s doubly true with something as potentially game-changing as AI and ML.

    For example, start applying ML tools to just a small section of your data, rather than trying to do too much too soon. Pick a specific challenge that you have, focus on it, and experiment with refining processes to achieve better results. Then increase AI use incrementally as the technology proves its worth.

    Bring your team with you

    Much of the recent publicity around AI has focused on doom-laden predictions of mass unemployment. When you talk about adopting AI and ML in your organisation, employee alarm bells may start ringing, which could have serious implications for staff morale and productivity.

    But in most organisations, AI is about augmenting human effort, not replacing it. AI and ML can automate the mundane tasks people don’t like doing, freeing them up for more creative activity. It can provide insight that improves human decision making, but humans still make the decisions. It is far from perfect, and human oversight of AI is required at every step.

    Your communications around the implementation of AI should emphasise these points. AI is a tool for your people to use, not a substitute for their efforts.

    Elixir and Erlang machine learning

    As businesses become familiar with AI and ML tools, they may start creating their own, tailored to their specific needs and circumstances. Organisations that develop and modify AI and ML tools increasingly do so using Elixir, a programming language based on the Erlang Virtual Machine (VM).

    Elixir is perfect for creating scalable AI applications for three core reasons:

    • Concurrency: Elixir is designed to handle lots of tasks simultaneously, which is ideal for AI applications that have to process large amounts of data from different sources.
    • Functional programming: Elixir favours small, pure functions that reach a desired goal as simply as possible. That’s ideal for AI because the simpler your AI algorithms, the more reliable they are likely to be.
    • Distributed computing: AI applications demand significant computational resources that developers spread across multiple machines. Elixir offers in-built distribution capabilities, making distributed computing straightforward.

    In addition, Elixir is supported by a wide range of libraries and tools, providing ready-made solutions to challenges and shortening the development journey.

    The result is AI applications that are efficient, scalable and reliable. That’s hugely important because as AI and ML become ever more crucial to business success, effective applications and processes will become a fundamental business differentiator. AI isn’t something you can ignore. If you aren’t already, start thinking about your own AI and ML strategy today.

    Want to know more about efficient, effective AI development with Elixir? Talk to us .

    The post What businesses should consider when adopting AI and machine learning appeared first on Erlang Solutions .

      This post is public /blog/what-businesses-should-consider-adopting-ai-machine-learning/