
X11 Xorg.log amdgpu “no screens found” when a non-graphics card is in the primary PCI Express slot

I bought a used LTO4 tape drive with an 8088 SAS connection. Why?

For fun, for backups that feel like they might be more resilient than the shingled magnetic recording hard drives I accidentally bought (thanks, Seagate, for disclosing that), and for the enjoyment of something so wonderfully mechanical in a world that is very “solid state”.

This necessitated a SAS card purchase, to give myself the ports necessary to actually plug in the tape drive. It seemed unhappy with one of my PCI Express slots, so I moved it up to the primary PCI Express slot — the one you’d usually use for a graphics card.

Now this Arch Linux machine has no need for fancy graphics. The APU integrated graphics on the Ryzen 7 5700G are perfectly adequate.

However, once the SAS card was in the primary PCI Express slot, X11 would no longer start. My SAS card showed up beautifully with lspci, as did the tape drive with lsscsi, but I had to sacrifice the GUI for it. Seems a little extreme, even for me.

X11 would fail with “no screens found” when the amdgpu driver was enumerating screens.
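
From memory, the tail of /var/log/Xorg.0.log ends with the classic fatal error, with the amdgpu enumeration detail just above it:

(EE)
Fatal server error:
(EE) no screens found(EE)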

The integrated graphics moved PCI ID

What had happened was that, with something occupying the primary PCI Express slot, the integrated graphics had moved to a different PCI bus ID.

I first identified where the “VGA controller” had gone with lspci:

08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series] (rev c8)

Then I edited /etc/X11/xorg.conf.d/amdgpu.conf to point the BusID at that new identifier.

For me, it had moved from PCI:7:0:0 to PCI:8:0:0.
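
For reference, a minimal sketch of what that file can look like; the Identifier string is arbitrary, and only the BusID needed changing:

Section "Device"
	Identifier "AMD"
	Driver "amdgpu"
	BusID "PCI:8:0:0"
EndSection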

And now, I have the delight of a GUI and a SAS card, and a tape drive.

The future of password management, for me

In a number of ways, I have become a dinosaur. One of those ways is an insistence that one layer of security in my password management solution be the fact that the data is not in the cloud. There is nothing on someone else’s commodity service and, thus, no data to be compromised in bulk as a result of an attack on a common service. Those services are probably pretty secure, but they are also huge, concentrated targets.

I have a really high standard to meet for the security of password management for my personal computing. That is how I feel comfortable.

So the march of password managers into the cloud presented me with a problem. Moving forward while maintaining my investment had become incompatible with this principle.

Additionally, I became more and more disillusioned with macOS as a desktop platform. Increasingly, maintaining control over what the system is actually doing became impossible. (Why is the News app refreshing in the background when I have never opened it on this brand new install??) So the new solution needs to be native to the Linux desktop.

So my requirements ruled out my long-time password management app, 1Password, which is clearly moving towards cloud-only. A migration of some kind was in order.

The Options

  • Bitwarden: Interesting, and open source. Yes, you can self-host, but for syncing purposes you are still exposing a host to the whole web, which centralises all that data, and you would likely need to run that host in the cloud.
  • Pass: Command-line based, and built around GnuPG. I got somewhere with this, but ultimately found myself wanting a bit more of a GUI for managing and sorting the password data.
  • KeePassXC: Provides a desktop GUI app with the categorisation and management I am used to, with import capabilities from 1Password. Locally syncable (albeit not bidirectionally with great ease) with Strongbox on iOS.

I have found myself with a combination of KeePassXC and Strongbox on iOS.

I do sacrifice some convenience on the desktop, as I have not yet installed the KeePassXC browser extension. I would love it to be in Mozilla’s “Recommended” category, where they review the extension on an ongoing basis. I trust that the KeePassXC developers are not malicious, but browser integration carries risks whose implications I don’t fully understand: the boundary crossing from an external app into the browser context affects the principle of data isolation in ways I haven’t studied.

So, KeePassXC + Strongbox + local network syncing of data is where I have landed.

And because supporting the projects that we depend upon is important, I have paid for the premium version of Strongbox and made an annual donation to the KeePassXC project. I am one of a decreasing number of people who will want to maintain this level of control, so, given that I am fortunate enough to be able to, providing the resources to keep this alive is something I wanted to do.

Installing OpenWRT 19.07.2 on TP-Link WA801ND v5

I recently had some success installing OpenWRT 19.07.2 on my TP-Link WA801ND v5 wireless access point, which I was happy to do since the stock firmware was untouched since 2017.

I’ve documented the process in a video. This isn’t a tutorial with absolutely everything covered; I don’t want to encourage people to brick their devices, so you’ll need to be proficient at configuring your network settings in your operating system and at running a TFTP server.
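
As a sketch of the TFTP side only: assuming this model’s recovery mode behaves like other TP-Link devices (the bootloader looks for a TFTP server at 192.168.0.66 and requests a specifically named recovery image; check the OpenWRT wiki page for your exact hardware revision and filename), dnsmasq can serve the firmware in a pinch:

$ sudo ip addr add 192.168.0.66/24 dev eth0
$ sudo dnsmasq --no-daemon --port=0 --enable-tftp --tftp-root=/srv/tftp

Here /srv/tftp contains the OpenWRT factory image, renamed to whatever the bootloader requests.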

It’s all, obviously, at your own risk. Stay safe and don’t brick your only access point!

Sorry for the inconsistency in audio quality.

Provisioning Raspbian with WiFi and SSH

UPDATE: This no longer works as of the Raspberry Pi OS releases based on bullseye. sdm might be a good alternative approach.

I’ve been playing with some Raspberry Pi Zero W machines for a few projects. They are inexpensive, with a form factor that makes for all sorts of interesting possibilities.

They are also a pain to initially provision, because you need adapter cables for mini-HDMI and micro-USB to get a monitor and keyboard connected.

With some help from this post from Linuxconfig.org, here’s what I’ve been doing to get the Pi Zero W up and running, on the network and ready for SSH access without having to plug anything in except power and the SD card!

Mount the image

$ fdisk -l *raspbian.img 

The initial FAT /boot partition lies 8192 sectors in, and a sector is 512 bytes, so the partition starts at a byte offset of 512 × 8192 = 4,194,304. Now we’ll mount the /boot partition using the loop pseudo-device, with that offset.

$ mkdir boot
$ sudo mount -o loop,offset=$((512 * 8192)) *raspbian.img boot

Set up SSH

Let’s go into our new boot subfolder where the image’s first partition is now mounted.

All we need to do to enable SSH is create an empty file called ssh.

$ cd boot
$ sudo touch ssh

Configure WiFi

To configure WiFi, we’ll need to drop a wpa_supplicant.conf file into this folder. Upon first boot, Raspbian moves it into the correct config location, and the Pi will be able to talk to our network out of the box.

Create a file called wpa_supplicant.conf with these contents:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=GB

network={
	ssid="Your network name"
	psk="Your network password"
	key_mgmt=WPA-PSK
}

You’ll need to replace the placeholders with your network name and password, and possibly change country too.

Unmount the image

$ cd ..
$ sudo umount boot

Write the image

Insert the target SD card and verify which device it is using fdisk.

Take great care to ensure that the command below is run with the correct device as the target, or you’ll overwrite a hard drive on your local system.

$ sudo fdisk -l

Disk /dev/sdb ...
Model: Card Reader

I am confident that sdb is correct, because I see from fdisk that the model is Card Reader.

$ sudo dd if=*raspbian.img of=/dev/sdb
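
If your dd is from GNU coreutils (8.24 or newer for status=progress), a couple of extra options give you progress output and flush the data to the card before dd exits:

$ sudo dd if=*raspbian.img of=/dev/sdb bs=4M conv=fsync status=progress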

Security

Remember that once you’ve booted, you’ve just spun up a Raspberry Pi on your network with the default username of pi and the default password of raspberry. Check your router or network logs for the assigned DHCP address of the new Pi, log in promptly over SSH and change that password!
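
That first login and password change looks like this; the address is a placeholder for whatever your router actually assigned:

$ ssh pi@192.168.1.50
pi@raspberrypi:~ $ passwd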

Red Hat Certified System Administrator

This was a whole month and a bit ago, and I never got around to throwing it here on the site, but I will now announce that I am a Red Hat Certified System Administrator!

QuickArchiver on Thunderbird — Archiving Messages to the Right Folder with One Click

Despite the dominance of webmail, I have long used a traditional desktop email client. I like having a local mail archive should “the cloud” have trouble, as well as the ability to exert control over the user interface and user experience. (That might be partly a euphemism for not having to see ads!)

Apple’s Mail.app, built into macOS (going to have to get used to not calling it OS X!), has served me pretty well for quite some time now, alongside Thunderbird when I’m on Linux. While Mail.app offered the smoothest interface for the platform, it didn’t always have all the features I wanted.

For example, running mail rules in Mail.app is more limited than I wanted. I could have rules run automatically as messages arrived in my inbox, or disable them entirely. But how I actually wanted to use rules was to cast my eye over my inbox, and then bulk archive (to a specific folder) all emails of a certain type if I’d decided none needed my fuller attention.

Recently, I moved to Thunderbird on my Mac for managing email and discovered QuickArchiver.

As well as letting you write rules yourself, QuickArchiver offers the clever feature of learning which emails go where, and then suggesting the right folder to which a message can be archived with a single click.

It’s still early days, but I am enjoying this. Without spending time writing rules, I’m managing email as before, and QuickArchiver is learning in the background what rules should be offered. The extra column I’ve added to my Inbox is now starting to populate with that one-click link to archive the message to the correct folder!

It’s just a nice little add-on if, like me, you (still??) like to operate in this way with your email.

Cautious Unattended Upgrades

I’m very excited to have put the ideas mentioned in my previous blog post about Cautious Unattended Upgrades into practice!

To quickly recap the idea: on a Debian-based test system (‘the canary’), the software installs the latest security updates, runs an automated browser-based test suite to make sure these new updates have not broken any critical functionality on our clients’ sites, and then ‘pushes’ just those package updates to the production servers.

In keeping with my original plan, the software is written in Ruby and uses Watir/Selenium WebDriver to run a suite of tests that verify, just as a human being would in a live web browser, that client websites work correctly.

Cautious Unattended Upgrades — the canary in the coal mine. Image by stevep2008 on Flickr, licensed under CC-BY 2.0.

I was expecting the biggest challenge would be getting this browser automation side of things working, but actually that proved very easy, which is a testament to the design of those projects.

The software is still a little rough around the edges, as I explain in the README file on GitHub, but I’m very pleased with the project’s progress. We have put it into use on our live systems at Van Patten Media, so we can keep servers promptly up-to-date with security patches without our intervention, but retain a greater peace of mind that our clients’ sites are still working as they should post-upgrade. (This is of course dependent on the quality and breadth of the tests that we write!)

I am particularly excited that this marks the first ‘real’ project in Ruby that I have written. Ruby isn’t a platform I have worked with too extensively before, so I have enjoyed challenging myself to be exposed to a different environment and quickly pick up how to achieve what I want to do. There is definitely more work to do — it really should be organised in a slightly more ‘Ruby-like’ way, and perhaps become a proper Ruby Gem, listed on rubygems.org, so those are things I will be looking at over the longer term for this project.

If you are interested in using Cautious Unattended Upgrades, or contributing to making it better, the project is licensed under a BSD-style licence and the code is available on its GitHub project page.

Automating Security Updates… Cautiously

Effraction by Sébastien Launay on Flickr. Licensed under CC-BY 2.0.

My attention has turned recently to how to automate the installation of security updates on various Linux distributions.

As Van Patten Media runs more servers, the effort and time needed to apply critical security updates promptly grows. Waiting several days to get security fixes just isn’t acceptable in a post-Shellshock era, yet there is always a risk of a completely automated update breaking important functionality.

One of the projects I will be investigating over the next few weeks is how we might build an automated test environment that could apply the updates quickly to a test VM, run a test suite to verify none of our critical client functionality breaks, and then push those updates to the live servers.

There are various solutions for truly automatic updates; I focus on Debian’s unattended-upgrades package here. What seems to be more difficult, however, is being able to push that list of ‘approved’ packages and just install those when we are ready.
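
For anyone unfamiliar, the ‘truly automatic’ baseline is only a couple of small config files. A minimal sketch, noting that the exact origin strings vary between Debian releases:

/etc/apt/apt.conf.d/20auto-upgrades:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

/etc/apt/apt.conf.d/50unattended-upgrades (excerpt):

Unattended-Upgrade::Allowed-Origins {
	"${distro_id}:${distro_codename}-security";
};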

My current train of thought on how to proceed on this is as follows:

  • Test box installs the day’s upgrades
  • Runs the test suite automatically
  • If the test suite passes, it determines which packages were most recently installed
  • Pushes those package names to the unattended-upgrades whitelist on the clients (see the sketch after this list)
  • Clients, on next unattended-upgrades run, will install those upgrades
  • Upon successful upgrade, we reset the whitelist
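
A sketch of the client-side piece of that whitelist idea, assuming a version of unattended-upgrades new enough to support Unattended-Upgrade::Package-Whitelist (older releases only offer a blacklist; check your version’s documentation), with a hypothetical drop-in filename and package names:

/etc/apt/apt.conf.d/51canary-whitelist:

Unattended-Upgrade::Package-Whitelist {
	"bash";
	"openssl";
};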

I am in the very early stages of looking at this, so that is a very rough sketch of where my thoughts are currently. There are missing pieces, but I was looking at Watir for the browser test suite component.

I would be interested to hear from anyone who has looked at this before, or if anyone knows of any interesting similar projects I haven’t found!

Adventures in SELinux

Security Enhanced Linux (SELinux) is a pretty powerful technology that adds another layer of access control to a Linux system. It helps significantly limit the ability for an attacker who has partly compromised a system to use their access to jump deeper into the system.

It has been standard in Red Hat Enterprise Linux and its derivatives for quite some time, and is often the cause of many a headache when something doesn’t work because it is being (apparently) silently blocked by SELinux’s security enhancements!

Its potential to cause breakage, especially when third-party bits and pieces are brought into the mix, means the advice from well-meaning individuals is often a cry of “just turn off SELinux!”, leaving the system without that extra layer of protection.

I will not pretend that my recent dealings with SELinux in CentOS 7 have been free of frustration, but a few simple tools have made it a surprisingly straightforward affair to get something up and running again if a particular behaviour (always of something a bit third-party, in my experience!) is being erroneously blocked.

I think a big part of what makes SELinux get switched off in frustration is the perception that it is breaking things silently, and the psychological impact of its verbose ‘scariness’ when you do find those logs!

audit2why

As long as you remember that SELinux can be the cause of potential unexplained weirdness, your first port of call can be audit2why:

audit2why -a

What is particularly nice about this tool is how quickly you get (semi-)human readable output, detailing which rules an application is breaking. If you do hit one of these ‘weird problems’, a quick trip to this tool usually makes it clear that SELinux is the cause of the failure!

audit2allow

I was surprised by how relatively quick it was to identify an issue with audit2why and make a custom module with audit2allow to get an application working again.

There is a good set of instructions in the Red Hat manual.

It sounds like a big deal, but the tools have made it almost completely automated — it really isn’t necessary to have a deep understanding of SELinux’s internal workings.
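
The shape of that workflow, with a hypothetical module name, is roughly this: collect the denials, generate and compile a policy module, then install it.

grep denied /var/log/audit/audit.log | audit2allow -M mymodule
semodule -i mymodule.pp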

setsebool

Finally, some flags that you might need are disabled by default for SELinux-protected applications. Again, audit2why will often make it clear what you need to toggle using this tool!

For example, a web server process that does legitimately send out emails might need the appropriate flag switched on. Without it, the web server doesn’t get the right to talk to sendmail.
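
For that specific example, the fix is a boolean; httpd_can_sendmail is the relevant one on RHEL/CentOS, and -P makes the change persist across reboots:

getsebool -a | grep sendmail
setsebool -P httpd_can_sendmail on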

Give SELinux Another Chance!

I suppose my point in this rambling is this: give SELinux a chance if you have given up on it in the past. If you have the time to set up your system properly (and you should!), taking a little extra time to figure out how to grant only the permissions really needed makes a material difference to your security should one application be compromised. You have to assume that an attacker can get a foot in the door, so making their life a lot harder at that point is worth making yours slightly harder on the odd occasion.

With a little patience and the use of the tools I have talked about, I think it is a lot easier to work with than it might seem at first glance, or when it first arrived in RHEL many moons ago!

Piwik for Web Analytics

Google Analytics is almost ubiquitous as the solution for collecting useful information about how your website is being used by visitors. It is a good product, and has evolved over the years to be very flexible indeed.

But since it first launched, my opinions of Google have certainly changed, as have many others’. Without wishing to get into a debate on the subject, there definitely is a market for a competitor to this very useful tool that might free us of that reliance on Google infrastructure and be more respecting of our visitors, by means of initiatives like Do Not Track.

Piwik’s desktop and mobile views

I recently deployed Piwik, an open source PHP-based application intended to replace Google Analytics. A full disclosure — it will not be as full-featured as Google Analytics for those people using the full power of that solution, but it puts the power and control back in your hands. Moreover, it uses a very similar-looking (perhaps even largely compatible) JavaScript API, meaning I had to do little work to figure out how to track the events that I wanted.

With built-in support for avoiding the use of cookies altogether, you can sidestep the well-meaning, but ridiculously ill-conceived EU Cookie law and its onerous “we use cookies!” notifications entirely, while still delivering enough tracking capability for many simpler analytics applications where detailed insights into repeat visits aren’t so important.
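
For illustration, the standard Piwik tracking snippet with cookie-less tracking enabled looks roughly like this; the tracker hostname and site ID below are placeholders for your own installation:

var _paq = _paq || [];
_paq.push(['disableCookies']);
_paq.push(['trackPageView']);
_paq.push(['enableLinkTracking']);
(function() {
	var u = "//stats.example.com/";
	_paq.push(['setTrackerUrl', u + 'piwik.php']);
	_paq.push(['setSiteId', '1']);
	var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
	g.async = true; g.defer = true; g.src = u + 'piwik.js';
	s.parentNode.insertBefore(g, s);
})();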

I haven’t made the time to replace Google Analytics on this site with it yet, but that is on my list of things to do! Right now, I have some custom code server-side that detects your Do Not Track status and suppresses the Google Analytics JavaScript entirely, but Piwik would do away with that need for complexity.

It might not do enough for your application — but as a way to put your money where your mouth is and genuinely support the user’s right to give and withdraw consent for tracking, it is most definitely worth a look.