
      AlmaLinux says Red Hat source changes won’t kill its RHEL-compatible distro

      news.movim.eu / ArsTechnica · Monday, 24 July, 2023 - 19:38

    AlmaLinux's live media, offering a quick spin or installation.

    AlmaLinux lets you build applications that work with Red Hat Enterprise Linux but can't promise the exact same bug environment. That's different from how they started, but it's also a chance to pick a new path forward. (credit: AlmaLinux OS)

    I asked benny Vasquez, chair of the AlmaLinux OS Foundation, how she would explain the recent Red Hat Enterprise Linux source code controversy to somebody at a family barbecue—somebody who, in other words, might not have followed the latest tech news quite so closely.

    "Most of my family barbecues are going to be explaining that Linux is an operating system," Vasquez said. "Then explaining what an operating system is."

    It is indeed tricky to explain all the pieces—Red Hat, Red Hat Enterprise Linux, CentOS, CentOS Stream, Fedora, RHEL, Alma, Rocky, upstreams, downstreams, source code, and the GPL—to anyone who isn't familiar with Red Hat's quirky history, and how it progressed to the wide but disparate ecosystem it has today. And, yes, Linux in general. But Vasquez was game to play out my thought experiment.



      Red Hat’s new source code policy and the intense pushback, explained

      news.movim.eu / ArsTechnica · Friday, 30 June, 2023 - 15:53

    A be-hatted person, tipping his brim to the endless amount of text generated by the conflict of corporate versus enthusiast understandings of the GPL. (credit: Getty Images)

    When CentOS announced in 2020 that it was shutting down its traditional "rebuild" of Red Hat Enterprise Linux (RHEL) to focus on its development build, Stream, CentOS suggested the strategy "removes confusion." Red Hat, which largely controlled CentOS by then, considered it "a natural, inevitable next step."

    Last week, the IBM-owned Red Hat continued "furthering the evolution of CentOS Stream" by announcing that CentOS Stream would be "the sole repository for public RHEL-related source code releases," with RHEL's core code otherwise restricted to a customer portal. (RHEL access is free for individual developers and up to 16 servers, but that's largely not what is at issue here).

    Red Hat's post was a rich example of burying the lede and a decisive moment for many who follow the tricky balance of Red Hat's open-source commitments and service contract business. Here's what followed.



      By the way, why is Red Hat called Red Hat?

      news.movim.eu / Numerama · Sunday, 19 June, 2022 - 18:28


    It's one of the best-known Linux distributions. Its parent company was acquired by IBM for tens of billions of dollars. We're talking about Red Hat. And the company's name has a story. [Read more]



      Red Hat Virtualization All-in-One

      pubsub.slavino.sk / practical-admin · Saturday, 21 March, 2020 - 22:32 · 10 minutes

    This was originally published as a Gist to help some peers, however since those are hard to discover I am reposting here.

    The goal is to install Red Hat Virtualization onto a single host, for example a home lab. I'm doing this with Red Hat Enterprise Linux (7.7) and RHV, but it will work just as well with CentOS and oVirt. If you're using RHV, note that this is not supported in any way!

    The Server

    My home lab consists of a single server with adequate CPU and RAM to host a few things. I’m using an old, small HDD for the OS and a larger NVMe drive will be used for VMs. My host has only a single 1GbE link, which isn’t an issue since I’m not expecting to use remote storage and, being a lab, it’s ok if it doesn’t have network access due to a failure…it’s a lab!

    You can use pretty much whatever hardware with whatever config you like for hosting your virtual machines. The only real requirement is about 6GB of RAM for the OS + hosted engine, but you’ll want more than that to host other VMs.

    Install and configure

    Before beginning, make sure you have forward and reverse DNS working for the hostnames and IPs you’re using for both the hypervisor host and RHV Manager. For example:

    • 10.0.101.20 = rhvm.lab.com
    • 10.0.101.21 = rhv01.lab.com
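
    To confirm both directions resolve before you start, here's a minimal check using the example names and addresses above (the host command is part of the bind-utils package on RHEL):

      # forward lookups should return the IPs listed above
      host rhvm.lab.com
      host rhv01.lab.com
      
      # reverse lookups should return the matching hostnames
      host 10.0.101.20
      host 10.0.101.21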

    With that out of the way, let’s deploy and configure RHV!

    1. Install RHEL hypervisor OS

      I’m using RHEL, not RHV-H, because it’s easier to manage and add packages (such as an NFS server). I’m going to assume that you’ve verified the CPU virtualization extensions, etc. have been enabled via BIOS/UEFI. If you’re feeling particularly paranoid, use the virt-host-validate qemu command to check that the host is configured for virtualization.

      If you are using a single, shared drive instead of separate drives for OS and VMs, I highly recommend allocating about 40GiB for the RHEL OS and reserving the remainder for VM storage domains. If you’re using some kind of hardware or software RAID for the OS drive, configure however you like.

      After installing RHEL 7.7, register and attach it to the appropriate pool.
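
      Registration looks roughly like this (a sketch; the pool ID is a placeholder you'll need to look up for your own subscription):

      # register the host with Red Hat
      subscription-manager register
      
      # find the pool ID for your RHV subscription, then attach to it
      subscription-manager list --available
      subscription-manager attach --pool=<pool_id>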

    2. Add the needed repos and update, install cockpit

      Following the docs here, enable the repos and update the host.

      # enable the needed repos
      subscription-manager repos \
       --disable='*' \
       --enable=rhel-7-server-rpms \
       --enable=rhel-7-server-rhv-4-mgmt-agent-rpms \
       --enable=rhel-7-server-ansible-2-rpms
      
      # update everything
      yum -y update
      
      # install cockpit with the various add-ons
      yum -y install cockpit-ovirt-dashboard
      
      # enable cockpit and open the firewall
      systemctl enable cockpit.socket
      firewall-cmd --permanent --add-service=cockpit
      
      # since a kernel update probably got installed, reboot the host.  If not, 
      # start cockpit, reload the firewall, and skip this reboot.
      reboot

    3. Configure host storage and management network

      If you have a fancy storage setup for VM storage (RAID0,1,5,6,10; ZFS; whatever) now is the time to do it. Same for any network config (bond, etc.) needed for management (VM networks come later) that wasn’t done pre-install.

      My host, with a separate NVMe drive for VMs, was configured using Cockpit. The drive was formatted for LVM, and in the VolGroup, I created a thin pool, which then has two (thin) volumes:

      • rhv_she – 100GiB
      • rhv_data – remaining capacity

      These volumes are formatted using XFS and mounted to /mnt/rhv_she and /mnt/rhv_data respectively. Last, but not least, set permissions:

      # set permissions for the vdsm user and kvm group
      chown 36:36 /mnt/*
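
      For reference, roughly the same layout can be built from the CLI instead of Cockpit, followed by the same chown as above. This is a minimal sketch under assumed names: the device (/dev/nvme0n1), the volume group (vg_rhv), and the 500G virtual size for rhv_data are placeholders to adapt.

      # create a PV, VG, and a thin pool spanning the drive (names are examples)
      pvcreate /dev/nvme0n1
      vgcreate vg_rhv /dev/nvme0n1
      lvcreate --type thin-pool -l 100%FREE -n thinpool vg_rhv
      
      # thin volumes have virtual sizes, so space is only allocated as it's used
      lvcreate -V 100G --thinpool vg_rhv/thinpool -n rhv_she
      lvcreate -V 500G --thinpool vg_rhv/thinpool -n rhv_data
      
      # format with XFS and mount (add matching /etc/fstab entries to persist)
      mkfs.xfs /dev/vg_rhv/rhv_she
      mkfs.xfs /dev/vg_rhv/rhv_data
      mkdir -p /mnt/rhv_she /mnt/rhv_data
      mount /dev/vg_rhv/rhv_she /mnt/rhv_she
      mount /dev/vg_rhv/rhv_data /mnt/rhv_data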

      I’m using NFS to create the illusion of shared storage, just in case I have a second+ host later.

      # create the exports file, substitute your subnet below
      cat << EOF > /etc/exports
      /mnt/rhv_she 10.0.101.0/24(rw,async,no_root_squash)
      /mnt/rhv_data 10.0.101.0/24(rw,async,no_root_squash)
      EOF
      
      # enable the server
      systemctl enable --now nfs-server
      
      # allow access (reload so the permanent rule takes effect immediately)
      firewall-cmd --permanent --add-service=nfs
      firewall-cmd --reload

      Test NFS:

      mkdir /mnt/test && mount hostname_or_ip:/mnt/rhv_she /mnt/test
      date > /mnt/test/can_touch_this
      rm /mnt/test/*
      umount /mnt/test
      rmdir /mnt/test

    4. Using Cockpit, deploy RHV Manager

      Follow the docs here.

      I assign 2 vCPUs and 4GiB RAM to the VM. It may complain. It’ll be fine.

      Once ready, click the Next button; it’ll prepare and stage some things, including downloading the Self-Hosted Engine (SHE) VM template. Note that this is a few GiB in size, so it may take a while if your internet is slow.

      At some point, it will ask for the storage you want to use for SHE. Point it to the NFS export for rhv_she, e.g. 10.0.101.21:/mnt/rhv_she. The disk size should be pre-populated around 80GiB; I leave it at the default value since the underlying LVM volume is thin provisioned anyway.
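
      If you’d rather skip Cockpit entirely, the same wizard can be run interactively from a terminal; it prompts for the same answers (storage path, VM sizing, etc.) described above.

      # alternative to the Cockpit wizard: interactive CLI deployment
      hosted-engine --deploy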

    5. Configure and update RHV Manager

      Start by putting the HA cluster for SHE into maintenance mode. From the hypervisor node…

      # From the hypervisor node, set maintenance mode
      hosted-engine --set-maintenance --mode=global

      SSH to the RHV-M virtual machine and follow the docs.

      # ssh to the RHV-M / SHE virtual machine
      ssh hostname_or_ip_of_hosted_engine
      
      # register and attach
      subscription-manager register
      subscription-manager attach --pool=blahblahblah
      
      # add the repos
      subscription-manager repos \
       --disable='*' \
       --enable=rhel-7-server-rpms \
       --enable=rhel-7-server-supplementary-rpms \
       --enable=rhel-7-server-rhv-4.3-manager-rpms \
       --enable=rhel-7-server-rhv-4-manager-tools-rpms \
       --enable=rhel-7-server-ansible-2-rpms \
       --enable=jb-eap-7.2-for-rhel-7-server-rpms
      
      # check for updates
      engine-upgrade-check
      
      # assuming it returns positive (otherwise, stop here)
      yum -y update ovirt\*setup\* rh\*vm-setup-plugins
      
      # run engine-setup to update the system; more or less, accept the defaults (no 
      # need to do backups of the databases) and let it do its thing
      engine-setup
      
      # once done, update the remaining OS packages
      yum -y update
      
      # if you're planning on updating the hypervisor, shut down RHV-M
      shutdown -h now
      
      # if you're not updating the hypervisor, reboot if a kernel update was applied
      #reboot

      And, finally, update the hypervisor.

      # make sure the RHV-M VM is down
      hosted-engine --vm-status
      
      # update packages in the normal way
      yum -y update
      
      # reboot
      reboot
      
      # when the host comes back up, reconnect via ssh or console
      # The command below will take a few minutes to actually work.  At first it will
      # spit out errors about how it can't connect to storage and tell you to check
      # a few services.  You can view the logs for them, etc., but for me it usually
      # takes about 5 minutes before it responds correctly (with a VM down message).
      hosted-engine --vm-status
      
      # once it's responding, restart RHV-M
      hosted-engine --vm-start

      Give the RHV-M VM a minute or two to start up, then browse to the admin portal: https://hostname.domainname/ovirt-engine/webadmin/.

      Since there is only one node in the cluster and no chance for RHV-M HA, there’s no harm in leaving the SHE cluster perpetually in maintenance mode. If you feel the need, remove it from maintenance mode using the command hosted-engine --set-maintenance --mode=none from the hypervisor host.

    6. Configure the RHV environment

      At this point you should be logged in to the RHV-M admin GUI and be greeted by the (mostly empty) dashboard. Your one host should be added to the default datacenter and you should have a storage domain (named whatever you specified during the install, hosted_storage by default).

      Let’s finish configuring the RHV deployment. At a minimum, this will mean…

      • If needed, configure additional physical networks.

        If you need to configure additional physical adapters (standalone or bonds) for VM traffic, storage, live migration, etc., now is the time to do so. Browse to Compute -> Hosts and click on the name of the host, then select the “Network Interfaces” tab and, finally, the “Setup Host Networks” button in the upper right.

      • If needed, configure additional logical networks.

        A default ovirtmgmt network will have been created that is capable of placing VMs onto the same network as the management interface. If you need additional configuration (e.g. VLANs), browse to Network -> Networks and add them. Once the network(s) have been defined, browse to Compute -> Hosts, select the host (click the name to view details), and browse to the “Network Interfaces” tab. Click the “Setup Host Networks” button in the upper right and adjust the network config by dragging and dropping the logical network onto the physical interface. Once done, click OK to apply.

        Note that if you adjust the ovirtmgmt network, there may be some flakiness when applying multiple changes in one commit. Simply avoid adjusting it in conjunction with other changes.

      • Add the second storage domain.

        Browse to Storage -> Domains and click the “New Domain” button in the upper right. Fill in the details for an NFS domain (assuming you followed the instructions above) at /mnt/rhv_data. Give it a creative and descriptive name like “rhv_data” so you know its function!

      • Enable overcommit.

        By default RHV won’t overcommit memory. To fix this, browse to Compute -> Cluster, highlight the cluster (Default, by default), and click the “Edit” button. Browse to the “Optimization” tab, then set “Memory Optimization” to your desired value. I also recommend enabling “Count threads as cores” and both “Enable memory balloon optimization” and “Enable KSM” (configured for “best KSM effectiveness”) on this same tab.

      • Optionally, remove Spectre/Meltdown protection.

        You may want to remove the IBRS Spectre/Meltdown mitigations if you are willing to trade less security for more CPU performance. If so, browse to Compute -> Cluster, highlight the cluster (by default, Default), and click the “Edit” button in the upper right. On the general tab, for CPU type, choose the latest generation supported by your CPU which doesn’t have IBRS SSBD (for Intel) or IBPB SSBD (for AMD) in it.

      • Verify there are no conflicts with MAC address ranges.

        If there is more than one RHV deployment on your network, verify that they aren’t using the same MAC address ranges for virtual machines. Browse to Administration -> Configure, then choose the “MAC Address Pools” tab. Click on the default pool and press the “Edit” button at the top of the modal. Check the range against any other instances and adjust if needed.

    Some closing thoughts

    • Uploading ISOs / templates can be done via the GUI, but you’ll need to download the CA and trust it before it’ll succeed (a scripted sketch follows this list). To download the CA bundle, browse to https://hostname.domainname/ovirt-engine/ and select “CA Certificate”, on the left side under “Downloads”. Once downloaded, add it to your keychain and trust it as needed.

      To upload an ISO, browse to Storage -> Disks, then choose Upload -> Start in the upper right corner. Click “Test Connection” in the lower part of the ensuing modal to verify that it will work. Assuming the test passed, choose the ISO and the storage domain you want it to land in, then click OK.

    • Console access is, arguably, easier using noVNC vs SPICE with VirtViewer…and is definitely easier if the host is not directly accessible by the client. For each VM, after it’s powered on, highlight the VM in the Compute -> Virtual Machines view, then select the dropdown for “Console” in the upper right and choose the “Console Options”. Select the radio button for “VNC” at the top, then “noVNC” below. Click OK. When opening the console, it will now open in a new window/tab using the HTML5 noVNC client.
    • Applying updates will require all of the VMs to be shut down. It’s not required, but I find that putting the storage domains in maintenance mode first eliminates some issues with waiting for the storage domain during reboot. To do so, browse to the datacenter (Compute -> Datacenters), click the datacenter name, then in the Storage tab, highlight the storage domain and click the “Maintenance” button in the upper right.

      After that, update RHV-M and the hypervisor OS just as described in step 5 above.
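
    For the CA trust step in the first bullet above, here’s a scripted sketch for a RHEL/CentOS client; the pki-resource URL is the standard oVirt/RHV CA download endpoint, and the hostname is a placeholder.

      # fetch the CA certificate from RHV-M (-k because it isn't trusted yet)
      curl -k -o rhvm-ca.pem 'https://hostname.domainname/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
      
      # trust it system-wide on a RHEL/CentOS client
      cp rhvm-ca.pem /etc/pki/ca-trust/source/anchors/
      update-ca-trust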

    The post Red Hat Virtualization All-in-One appeared first on The Practical Administrator.