      The 5 Best Tools to Create a Bootable USB From an ISO in Linux / linuxyournal · Yesterday - 16:00

    The 5 Best Tools to Create a Bootable USB From an ISO in Linux

    Creating a bootable USB drive is a cornerstone skill for anyone interested in exploring different operating systems or working in system administration. A bootable USB drive allows a user to boot into a different operating system, independent of the primary OS installed on the machine. This is particularly useful for system recovery, testing new OS builds, or installing a new system altogether. Linux, known for its robustness and versatility, offers a plethora of tools for creating bootable USB drives from ISO files, which are exact copies of disk data. This guide aims to delve into the top five tools available on Linux for crafting bootable USB drives from ISO files.

    Understanding ISO Files

    ISO files are disk image files that encapsulate the file system and the data content of a disk. They serve as exact digital replicas of optical disk data, whether it be a CD, DVD, or Blu-ray disk. The importance of ISO files in creating bootable USB drives cannot be overstated. They act as the source blueprint from which the bootable drive is created, ensuring that the resulting USB drive is an exact copy of the original disk data, necessary for correct operating system functionality and booting.

    Top 5 Tools for Creating a Bootable USB in Linux


    UNetbootin

    UNetbootin (Universal Netboot Installer) is a free and open-source tool that has been around for many years. It is widely recognized for its ease of use and support for a variety of operating systems.

    Upon launching UNetbootin, you're presented with the option to either download a distribution or use a pre-downloaded ISO file. Select the ISO file, choose the USB drive you want to write to, and click on the 'OK' button to start the creation process.


    Rufus

    Rufus is known for its speed and reliability. Though it is a Windows application with no native Linux build, Linux users can run it through a compatibility layer such as Wine. It's a small utility that packs a punch, offering a range of partition schemes and file system types to cater to different OS requirements.

    Open Rufus, select the ISO file under the 'Boot selection' section, choose the USB drive under 'Device', set the desired system parameters under 'Partition scheme' and 'File system', and hit 'Start'.

    Etcher (BalenaEtcher)

    Etcher provides a clean and straightforward interface, eliminating the hassle often associated with creating bootable USB drives. Its three-step process is simple and intuitive.

    Launch Etcher, click 'Flash from file' to select your ISO file, choose your USB drive under 'Select target', and click 'Flash!' to begin the process.

    dd (Disk Dump)

    dd is a powerful disk copying tool that operates on a command-line interface. It's native to Unix-like systems and is revered for its flexibility and advanced features.
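
    In practice, writing an ISO to a USB stick with dd is a single command of the form dd if=distro.iso of=/dev/sdX (where /dev/sdX is a placeholder for your USB device — writing to the wrong device destroys its data). Because that form is destructive, the sketch below demonstrates the same copy-and-verify semantics safely with ordinary files:

    ```shell
    # dd copies raw bytes from an input (if=) to an output (of=).
    # Writing a real ISO to a USB stick uses the same form with a device as
    # the output, e.g. if=distro.iso of=/dev/sdX (placeholder name) -- destructive!
    # Safe demonstration with ordinary files:
    printf 'pretend ISO data' > demo.iso      # stand-in for a real ISO image
    dd if=demo.iso of=demo.img bs=4M status=none
    cmp demo.iso demo.img && echo "byte-for-byte copy verified"
    ```

    The bs=4M option sets a larger block size for faster transfers; adding status=progress (GNU dd) prints ongoing transfer statistics.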

    Tags: #Linux


      Core Knowledge That a Modern Linux Kernel Developer Should Have / linuxyournal · 2 days ago - 16:00

    Core Knowledge That a Modern Linux Kernel Developer Should Have

    The Linux kernel is written in the C programming language, so C is the most important language for a Linux kernel developer. Initially the kernel was written in GNU C (it can now also be built with LLVM), which extends standard C with additional keywords and attributes. I would recommend learning a modern C version such as C11, and additionally the GNU extensions, to be able to read kernel code effectively. Small, architecture-specific parts of the kernel and some highly optimized parts of several drivers are written in assembly language; this is the second language of choice. There are three main architectures nowadays: x86, ARM, and RISC-V. Which assembly language to choose depends on your hardware platform.

    You should definitely look at Rust, which is gaining popularity in the Linux kernel community as a safer and more reliable alternative to C.

    Linux is a highly configurable system and its configurability is based on the kernel build system, KBuild. Each developer should know the basics of KBuild and Make to be able to successfully extend/modify the kernel code. Last, but not least is shell scripting. It is hard to imagine Kernel development without command-line usage and a developer inevitably has to write some shell scripts to support their job by automating repetitive tasks.
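
    As a small illustration of the kind of helper script a kernel developer ends up writing, the sketch below reports the line count of every C source file in the current directory (the file names it finds are whatever happens to be there — a toy stand-in for real build or review automation):

    ```shell
    #!/bin/sh
    # Report the line count of every .c file in the current directory --
    # a toy example of automating a repetitive development task.
    for f in *.c; do
        [ -e "$f" ] || continue            # skip when no .c files match
        printf '%s: %s lines\n' "$f" "$(wc -l < "$f")"
    done
    ```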

    Software environment

    Linux kernel development is inextricably linked to the Git source control system; it is hard to imagine the modern kernel development workflow without it. So Git knowledge is a requirement.
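
    A minimal sketch of that workflow, runnable in a throwaway repository (the repository name, file, and commit message below are illustrative, not part of any real kernel tree):

    ```shell
    # Create a scratch repository and produce a kernel-style patch with Git.
    git init -q demo-repo
    git -C demo-repo config user.email dev@example.com   # identity for the demo only
    git -C demo-repo config user.name "Demo Dev"
    echo '/* fix */' > demo-repo/driver.c
    git -C demo-repo add driver.c
    git -C demo-repo commit -q -s -m "driver: demo fix"  # -s adds the Signed-off-by trailer
    git -C demo-repo format-patch -1 -o out              # emit the patch for mailing-list review
    ls demo-repo/out
    ```

    The -s (signoff) flag matters here: kernel submissions require a Signed-off-by line, and git format-patch produces the email-ready patch files that the mailing-list process expects.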

    Unless kernel developers run their kernel on specific or customized hardware, emulation is the developer's best friend. The most popular platform for this is QEMU/KVM. A typical workflow looks like this: a developer introduces some changes to the kernel or a driver, builds it, copies it into a virtual environment, and tests it there. If all is OK, the developer then tests the changes on real hardware; if something goes wrong, the kernel under the virtual machine crashes. In that case it is easy to shut down the VM, fix the error, and repeat the development/debug cycle. Without virtualization we would have to restart a real machine on every kernel crash, and development time would increase by an order of magnitude.
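
    For concreteness, a minimal QEMU invocation for boot-testing a freshly built kernel might look like the sketch below; the bzImage path, the rootfs.img disk image, and the root= device are assumptions that depend on your build and guest image:

    ```shell
    # Boot a just-built kernel under QEMU/KVM with a serial console.
    qemu-system-x86_64 \
        -enable-kvm \
        -kernel arch/x86/boot/bzImage \
        -drive file=rootfs.img,format=raw \
        -append "root=/dev/sda console=ttyS0" \
        -nographic
    ```

    With -nographic, the guest's serial console is attached to your terminal, so a kernel panic in the VM is immediately visible and the host is unaffected.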

    Tags: #Linux


      Linux Networking: A Simplified Guide to IP Addresses and Routing / linuxyournal · Thursday, 21 September - 16:00

    Linux Networking: A Simplified Guide to IP Addresses and Routing

    Every Linux enthusiast or administrator, at some point, encounters the need to configure or troubleshoot network settings. While the process can appear intimidating, with the right knowledge and tools, mastering Linux networking can be both enlightening and empowering. In this guide, we'll explore the essentials of configuring IP addresses and routing on Linux systems.

    Understanding Basic Networking Concepts

    What is an IP address?

    Every device connected to a network has a unique identifier known as an IP address. This serves as its 'address' in the vast interconnected world of the Internet.

    • IPv4 vs. IPv6: While IPv4 is still prevalent, its successor, IPv6, offers a larger address space and improved features. IPv4 addresses look like 192.168.1.1, whereas IPv6 addresses resemble 1200:0000:AB00:1234:0000:2552:7777:1313.

    • Public vs. Private IPs: Public IPs are globally unique and directly reachable over the Internet. Private IPs are reserved for internal network use and are not routable on the public Internet.

    Subnet Masks and Gateways

    A subnet mask determines which portion of an IP address is the network and which is the host. The gateway, typically a router, connects local networks to external networks.
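
    The arithmetic behind a subnet mask can be checked by hand: the network address is the bitwise AND of the IP address and the mask, octet by octet. A small shell sketch (the addresses are illustrative):

    ```shell
    # Compute the network address of 192.168.1.42 under mask 255.255.255.0 (/24).
    ip_addr="192.168.1.42"
    mask="255.255.255.0"
    # Split the dotted quads into octets using IFS word splitting.
    old_ifs=$IFS
    IFS=.
    set -- $ip_addr; a=$1 b=$2 c=$3 d=$4
    set -- $mask;    m1=$1 m2=$2 m3=$3 m4=$4
    IFS=$old_ifs
    # AND each address octet with the corresponding mask octet.
    echo "network: $((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
    # prints "network: 192.168.1.0"
    ```

    Here the first three octets survive the mask (255 keeps every bit) while the last is zeroed, so .42 is a host on the 192.168.1.0/24 network.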


    What is Routing?

    At its core, routing is the mechanism that determines how data should travel from its source to its destination across interconnected networks.

    Network Configuration Tools in Linux

    Linux offers both traditional tools like ifconfig and route, and modern ones like ip, nmcli, and nmtui. The choice of tool often depends on the specific distribution and the administrator's preference.

    NetworkManager and systemd-networkd have also modernized network management, providing both CLI and GUI tools for configuration.

    Configuring IP Addresses in Linux

    1. Using the ip command:

      • Display Current Configuration: ip addr show
      • Assign a Static IP: ip addr add 192.168.1.10/24 dev eth0
      • Remove an IP Address: ip addr del 192.168.1.10/24 dev eth0
    2. Using nmcli for NetworkManager:

    Tags: #Linux


      A Brief Story of Time and Timeout / linuxyournal · Thursday, 24 August - 16:00

    A Brief Story of Time and Timeout

    When working in a Linux terminal, you often encounter situations where you need to monitor the execution time of a command or limit its runtime. The time and timeout commands are powerful tools that can help you achieve these tasks. In this tutorial, we'll explore how to use both commands effectively, along with practical examples.

    Using the time Command

    The time command in Linux is used to measure the execution time of a specified command or process. It provides information about the real, user, and system time used by the command. The real time represents the actual elapsed time, while the user time accounts for the CPU time consumed by the command, and the system time indicates the time spent by the system executing on behalf of the command.

    time [options] command

    Let's say you want to measure the time taken to execute the ls command:

    time ls

    The output will provide information like:

    real    0m0.005s
    user    0m0.001s
    sys     0m0.003s

    In this example, the real time is the actual time taken for the command to execute, while user and sys times indicate CPU time spent in user and system mode, respectively.

    Using the timeout Command

    The timeout command allows you to run a command with a specified time limit. If the command does not complete within the specified time, timeout will terminate it. This can be especially useful when dealing with commands that might hang or run indefinitely.

    timeout [options] duration command

    Suppose you want to limit the execution of a potentially time-consuming command, such as a backup script (here called backup.sh), to 1 minute:

    timeout 1m ./backup.sh

    If backup.sh completes within 1 minute, the command will finish naturally. However, if it exceeds the time limit, timeout will terminate it.

    By default, timeout sends the SIGTERM signal to the command when the time limit is reached. You can also specify which signal to send using the -s (--signal) option.

    Combining time and timeout

    You can also combine the time and timeout commands to measure the execution time of a command within a time-constrained environment.
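
    A runnable sketch of the combination, using sleep as a stand-in for a long-running job:

    ```shell
    # Time a job that gets cut off: sleep 5 would run for five seconds, but the
    # 2-second limit terminates it early, so 'real' reports roughly two seconds.
    time timeout 2s sleep 5 || true

    # timeout signals expiry through its exit status: 124 means the limit fired.
    rc=0
    timeout 1s sleep 3 || rc=$?
    echo "exit status: $rc"   # prints "exit status: 124"
    ```

    Checking for exit status 124 is the standard way for a wrapper script to distinguish "the command timed out" from "the command failed on its own".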

    Tags: #Linux


      UNIX vs Linux: What's the Difference? / linuxyournal · Tuesday, 22 August - 16:00

    UNIX vs Linux: What's the Difference?

    In the intricate landscape of operating systems, two prominent players have shaped the digital realm for decades: UNIX and Linux. While these two systems might seem similar at first glance, a deeper analysis reveals fundamental differences that have implications for developers, administrators, and users. In this comprehensive article, we embark on a journey to uncover the nuances that set UNIX and Linux apart, shedding light on their historical origins, licensing models, system architectures, communities, user interfaces, market applications, security paradigms, and more.

    Historical Context

    UNIX, a pioneer in the world of operating systems, emerged in the late 1960s at AT&T Bell Labs. Developed by a team led by Ken Thompson and Dennis Ritchie, UNIX was initially created as a multitasking, multi-user platform for research purposes. In the subsequent decades, commercialization efforts led to the rise of various proprietary UNIX versions, each tailored to specific hardware platforms and industries.

    In the early 1990s, a Finnish computer science student named Linus Torvalds ignited the open-source revolution by developing the Linux kernel. Unlike UNIX, which was mainly controlled by vendors, Linux leveraged the power of collaborative development. The open-source nature of Linux invited contributions from programmers across the globe, leading to rapid innovation and the creation of diverse distributions, each with unique features and purposes.

    Licensing and Distribution

    One of the most significant differentiators between UNIX and Linux lies in their licensing models. UNIX, being proprietary, often required licenses for usage and customization. This restricted the extent to which users could modify and distribute the system.

    Conversely, Linux operates under open-source licenses, most notably the GNU General Public License (GPL). This licensing model empowers users to study, modify, and distribute the source code freely. The result is a plethora of Linux distributions catering to various needs, such as the user-friendly Ubuntu, the stability-focused CentOS, and the community-driven Debian.

    Kernel and System Architecture

    The architecture of the kernel—the core of an operating system—plays a crucial role in defining its behavior and capabilities. UNIX systems typically employ monolithic kernels, meaning that essential functions like memory management, process scheduling, and hardware drivers are tightly integrated.

    Linux also utilizes a monolithic kernel, but it introduces modularity through loadable kernel modules. This enables dynamic expansion of kernel functionality without requiring a complete system reboot. Furthermore, the collaborative nature of Linux development ensures broader hardware support and adaptability to evolving technological landscapes.

    Tags: #Linux


      The 8 Best SSH Clients for Linux / linuxyournal · Thursday, 17 August - 16:00

    The 8 Best SSH Clients for Linux


    SSH, or Secure Shell, is a cryptographic network protocol for operating network services securely over an unsecured network. It's a vital part of modern server management, providing secure remote access to systems. SSH clients, applications that leverage SSH protocol, are an essential tool for system administrators, developers, and IT professionals. In the world of Linux, where remote server management is common, choosing the right SSH client can be crucial. This article will explore the 8 best SSH clients available for Linux.

    The Criteria for Selection

    When selecting the best SSH clients for Linux, several factors must be taken into consideration:


    Performance

    The speed and efficiency of an SSH client can make a significant difference in day-to-day tasks.

    Security Features

    With the critical nature of remote connections, the chosen SSH client must have robust security features.

    Usability and Interface Design

    The client should be easy to use, even for those new to SSH, with a clean and intuitive interface.

    Community Support and Documentation

    Available support and comprehensive documentation can be essential for troubleshooting and learning.

    Compatibility with Different Linux Distributions

    A wide compatibility ensures that the client can be used across various Linux versions.

    The 8 Best SSH Clients for Linux



    OpenSSH

    OpenSSH is the most widely used SSH client and server system. It’s open-source and found in most Linux distributions.


    Key Features

    • Key management
    • SCP and SFTP support
    • Port forwarding
    • Strong encryption

    Installation Process

    OpenSSH can be installed using package managers like apt-get or yum.
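
    A hedged sketch of the usual install commands; exact package names can vary slightly between distributions and releases:

    ```shell
    # Debian/Ubuntu
    sudo apt-get install openssh-client openssh-server

    # RHEL/CentOS (older releases use yum; newer ones use dnf)
    sudo yum install openssh-clients openssh-server
    ```

    On most desktop distributions the client half is already installed; the server package is only needed on machines you want to SSH into.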

    Pros and Cons


    Pros:

    • Highly secure
    • Widely supported
    • Flexible


    Cons:

    • Can be complex for beginners


    PuTTY

    PuTTY is a free and open-source terminal emulator. It’s known for its simplicity and wide range of features.


    Key Features

    • Supports SSH, Telnet, rlogin
    • Session management
    • GUI-based configuration

    Installation Process

    PuTTY can be installed from the official website or through Linux package managers.

    Pros and Cons


    Pros:

    • User-friendly
    • Extensive documentation


    Tags: #Linux


      Linux Containers Unleashed: A Comprehensive Guide to the Technology Revolutionizing Modern Computing / linuxyournal · Tuesday, 15 August - 16:00

    Linux Containers Unleashed: A Comprehensive Guide to the Technology Revolutionizing Modern Computing

    Definition of Linux Containers

    Linux Containers (LXC) are a lightweight virtualization technology that allows you to run multiple isolated Linux systems (containers) on a single host. Unlike traditional virtual machines, containers share the host system's kernel, providing efficiency and speed.

    Brief History and Evolution

    The concept of containerization dates back to the early mainframes, but it was with the advent of chroot in Unix in 1979 that it began to take a recognizable form. The Linux Containers (LXC) project, started in 2008, brought containers into the Linux kernel and laid the groundwork for the popular tools we use today like Docker and Kubernetes.

    Importance in Modern Computing Environments

    Linux Containers play a vital role in modern development, enabling efficiency in resource usage, ease of deployment, and scalability. From individual developers to large-scale cloud providers, containers are a fundamental part of today's computing landscape.

    Linux Containers (LXC) Explained


    Containers vs. Virtual Machines

    While Virtual Machines (VMs) emulate entire operating systems, including the kernel, containers share the host kernel. This leads to a significant reduction in overhead, making containers faster and more efficient.

    The Kernel's Role

    The Linux kernel is fundamental to containers. It employs namespaces to provide isolation and cgroups for resource management. The kernel orchestrates various operations, enabling containers to run as isolated user space instances.
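
    Namespaces are visible from user space: every process's namespace memberships appear as special links under /proc/<pid>/ns, and two processes share a namespace exactly when the link targets match. A quick look on any Linux system:

    ```shell
    # Each entry names a namespace type and its inode identifier,
    # e.g. "uts:[4026531838]".
    readlink /proc/self/ns/uts
    readlink /proc/self/ns/net
    # A child shell started normally inherits its parent's namespaces,
    # so its identifiers match the ones above.
    sh -c 'readlink /proc/self/ns/uts'
    ```

    Container runtimes work by creating processes whose links here point at fresh namespaces, which is what makes a container's hostname, network stack, and process tree private.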

    User Space Tools

    Tools like Docker, Kubernetes, and OpenVZ interface with the kernel to manage containers, providing user-friendly commands and APIs.



    Isolation

    Containers provide process and file system isolation, ensuring that applications run in separate environments, protecting them from each other.

    Resource Control

    Through cgroups, containers can have resource limitations placed on CPU, memory, and more, allowing precise control over their utilization.

    Network Virtualization

    Containers can have their own network interfaces, enabling complex network topologies and isolation.

    Popular Tools


    Docker

    Docker has become synonymous with containerization, offering a complete platform to build, ship, and run applications in containers.


    Kubernetes

    Kubernetes is the de facto orchestration system for managing containerized applications across clusters of machines, providing tools for deploying applications, scaling them, and managing resources.


    OpenVZ

    OpenVZ is a container-based virtualization solution for Linux, focusing on simplicity and efficiency, particularly popular in VPS hosting environments.

    Tags: #Linux



      Minarca: A Backup Solution You'll Love / linuxyournal · Wednesday, 7 June - 16:00

    Minarca: A Backup Solution You'll Love


    Data backup is a crucial aspect of information management. Both businesses and individuals face risks such as hard drive failure, human error or cyberattacks, which can cause the loss of important data. There are many backup solutions on the market, but many are expensive or difficult to use.

    That's where Minarca comes in. Developed by Patrik Dufresne of IKUS Software, Minarca is an open source backup solution designed to offer a simplified user experience while providing management and monitoring tools for system administrators. So let's take a closer look at how Minarca came about and how it compares to other solutions.

    History and evolution of the project

    Minarca is a data backup software, whose name comes from the combination of the Latin words "mi" and "arca", meaning "my box" or "my safe". The Minarca story begins with Rdiffweb, a web application developed in 2006 by Josh Nisly and other contributors to serve as the web interface to rdiff-backup.

    In 2012, Patrik Dufresne became interested in Rdiffweb and decided to improve its graphical interface. Since then, Rdiffweb has continued to evolve, including permissions management, quota management, reporting, statistical analysis, notifications and LDAP integration. However, Rdiffweb has remained a tool for technically competent people who are able to configure an SSH server, secure it and install rdiff-backup on all the machines to be backed up from the command line.

    It was with the goal of making data backup more accessible to less technical users that the development of Minarca began in 2014, building on the work done in Rdiffweb. The goal was to provide a fully integrated, turnkey, easy-to-use solution.

    Since its inception, Minarca has gone through several versions, including an early version of the agent written in Java for Linux and Windows. In 2020, the agent was rewritten in Python to better support the Linux, Windows, and macOS operating systems. Minarca is now a complete data backup solution that is accessible to everyone, regardless of technical skill level.

    The benefits of Minarca

    Comparison with Rdiffweb

    Minarca is the logical continuation of the Rdiffweb web application. Developed to provide a simplified backup experience, Minarca is designed to support administrators and users. Unlike Rdiffweb, Minarca offers rapid deployment on Linux, Windows, and macOS through the Minarca agent. In addition, Minarca manages the storage space, simplifies SSH key exchange, and supports multiple versions of rdiff-backup simultaneously. Minarca also improves security by isolating the execution of backups, enhancing the protection of sensitive data.

    Tags: #Linux


      Troubleshooting the "Temporary Failure in Name Resolution" Error in Linux / linuxyournal · Wednesday, 26 April - 16:00

    Troubleshooting the "Temporary Failure in Name Resolution" Error in Linux


    Linux users may encounter the "Temporary failure in name resolution" error while trying to access websites or execute networking commands. This error indicates that the system is unable to translate a domain name into its corresponding IP address. Several factors can contribute to this error, including network connectivity issues, incorrect configuration of the resolv.conf file, and firewall restrictions. In this guide, we will explore the common causes of this error and provide solutions to help you resolve the issue.

    Common Causes and Solutions

    Slow or No Internet Connection

    Before troubleshooting further, it's essential to check your internet connectivity. A slow or disconnected internet connection may be the root cause of the "Temporary failure in name resolution" error.


    Confirm that your system has a stable and working internet connection. If your internet connection is slow or disconnected, try to fix the connectivity issue before proceeding.

    Badly Configured resolv.conf File

    The resolv.conf file is responsible for configuring DNS servers on Linux systems. If this file is not set up correctly, the system may fail to resolve domain names.


    Start by opening the resolv.conf file in a text editor such as nano:

    sudo nano /etc/resolv.conf

    Ensure that at least one nameserver is defined in the resolv.conf file. A valid nameserver entry should look like this:

    nameserver 8.8.8.8

    If there is no nameserver defined in the file, add one. Two well-known public nameservers operated by Google are 8.8.8.8 and 8.8.4.4. After making the changes, save the file and restart the DNS resolver service:

    sudo systemctl restart systemd-resolved.service

    Verify that the DNS server is working correctly by pinging a website, for example:

    ping -c 3 google.com
    If communication is established with the website, the DNS server is working correctly.

    Misconfigured resolv.conf File Permissions

    If the resolv.conf file contains valid DNS servers, but the error persists, it may be due to incorrect file permissions.


    Change the ownership of the resolv.conf file to the root user:

    sudo chown root:root /etc/resolv.conf

    Modify the file permissions to allow all users on the system to read the file:

    sudo chmod 644 /etc/resolv.conf

    Try pinging a website again to check if the issue is resolved.

    Firewall Restrictions

    Firewall restrictions may block access to necessary ports, causing the error. Port 53, used for domain name resolution, is essential for DNS queries; port 43 is used for whois lookups.

    Tags: #Linux