Flock2017: day 1

Day 1 started out way too early: 7am (which feels like 4am to me!). After a quick breakfast downstairs and a run by registration, I made it to the ballroom just in time for opening announcements and then our Glorious Fedora Project Leader's "State of the Fedora" talk. The room was pretty packed actually, and as always Matt had some pretty charts and graphs. I found the premise interesting: that Fedora lives on the innovation curve at the very front left, a small bit behind the very bleeding edge, and back to where it starts getting boring. If that's true (and it seems likely), then we always need to be pulling in the very new stuff from one side and integrating it. So the state of things now is a bit on fire, but in a good way. After that was the group picture (interested to see how it turned out this time), and then everyone running a session got a minute (or less) to pitch their session and try to get folks interested in going. Then, on to sessions:

  • Lunch: Not bad. Sandwiches on the deck overlooking the resort's golf course.
  • Pagure hackfest: There were not many of us, but we made some progress. Pingou and I fixed the incoming email processing on pagure.io, so comments can now be added by replying via email.
  • State of the Fedora Server: A quick overview of server and what is going on. Amazingly we ran out of time. If folks still have more questions, please ask on the server list.
  • Fedora Legal: this is why I drink: a great session as always from Spot.
  • Lots of "hallway" discussion and talk on all kinds of topics.
Then it was time for games night + dinner (pizza) and beer. The evening ended with us hanging out in the hotel bar and chatting on all sorts of topics.

Flock2017: day 0 (and -1)

It's almost time for Flock 2017. This year it's in Cape Cod. Also this year I am heading there not from Denver, but from Salem, OR. My best option for flights this year was a red-eye from Portland to Boston, then the bus out to Hyannis. It was a long but uneventful journey: 10:45pm out of Portland, 7am arrival at Boston, and 10am arrival at the Hyannis bus station. I then started the walk (about a mile) to the hotel, but it turns out I ran into some Fedorans I know, and they had a car, so I got a nice ride to the hotel. The hotel is pretty large, but it's pretty weird to get to/from our rooms, as you have to go up and down stairs and around. It's definitely past its glory days, but still seems reasonable to me. After making it to the hotel: some hacking in the lobby, visiting, and then lunch and dinner at some local Main Street places (burger and beer, then Italian for dinner). After the long day, I was asleep before 10pm. :) Tomorrow: Day 1!

Public Service Announcement: Fedora 26 and delta rpms

Just a quick public service announcement: delta rpms (the much smaller files that contain just the changes from one package version to another) are currently not working for Fedora 26 (all other releases should be working fine). They actually were not fully working before either, but the problem wasn't fully detected until a few weeks after the release. The issue is that we now push out a bunch of alternative arch builds in the same updates as before, and we place those in another location on the master mirrors. This means that mash and bodhi (the things that make the updates repos/updates) need to know where to look for the older package rpms in order to make delta rpms against the new ones, but currently we just pass them one location for everything. We hope to fix this soon, but if you don't see delta rpms in F26 (yet), this is why. EDITED: As of 2017-09-01 we have F26 delta rpms fixed. Everything should be back to normal. Thanks for your patience.
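
To make the lookup problem concrete, here's a toy sketch (in Python) of the sort of per-arch search the tooling needs to do now that packages live under two trees. Everything here (paths, arch groupings, the helper name) is made up for illustration; the real mash/bodhi code is much more involved.

    # A toy sketch of the lookup mash/bodhi need: the previous build of a
    # package may now live under either the primary or the alternative arch
    # tree on the master mirrors. All paths and names here are hypothetical.
    import os

    PRIMARY_ARCHES = {"x86_64", "i386", "armhfp"}

    OLD_PACKAGE_ROOTS = {
        "primary": "/pub/fedora/linux/updates/26",
        "alternative": "/pub/fedora-secondary/updates/26",
    }

    def find_old_rpm(arch, filename):
        """Return the previous rpm to make a delta against, or None."""
        key = "primary" if arch in PRIMARY_ARCHES else "alternative"
        candidate = os.path.join(OLD_PACKAGE_ROOTS[key], arch, filename)
        return candidate if os.path.exists(candidate) else None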

Rawhide notes from the trail, the long road edition (2017-08-24)

Hey look, it's been exactly 30 days since my last rawhide post. What a ride it's been this last month: the mass rebuild was delayed a bit due to tooling issues, then releng availability, but finally happened around the beginning of the month. Then, sadly, another one was called for due to a binutils issue on ppc64. Luckily this was just all the archful packages (the noarch ones were unaffected and could be reused from the first rebuild). There have been a bunch of issues plaguing composes, including, but not limited to:

  • rdma-core replacing a number of other packages and needing lorax to adjust to it pulling perl into the base images.
  • rdma-core not building for armv7 and lorax needing to not try and install it on that platform.
  • Storage backend issues. We are now thinking it's some interaction between the Linux kernel and NFSv4.1. Whatever the cause, it has made composes and new build repos very slow at times.
  • New rpm that switched to sha256 headers, breaking signing because old rpm didn't correctly read the new headers (this has since been fixed in an update).
  • New rpm needing a rebuild of a bunch of things that linked against it.
  • After the mass rebuild, all of f28 needed signing with the new f28 key (took a few days, but done).
Due to all that we have had many fewer actual finished composes than we normally have (now that pungi will fail a compose if any required part fails). I'm not sure if that's making things better or worse, but hopefully they will get better. It does mean that when we do get a compose there's a better chance of it being usable, but also that when we don't, there are no new repos for people to use day to day. And tons of new things are landing:
  • A bunch of changes to debuginfo: debuginfo is now split per subpackage, and sources all live in a debugsource package.
  • Kerberos KCM is set up by default.
  • Tons of new versions of things: rpm, glibc, gcc, kernel, etc.
F27 branched off rawhide to go its way to release, and f24 went end of life (we thank it for its long and valiant service). Next week is Flock (Fedora's big yearly conference). I look forward to seeing my Fedora family and working on tons and tons of things. (This year's theme is more 'do' than 'talk'.) Hope to see many of you there!

rawhide notes from the trail, the 2017-07-24 edition

Greetings! Once again it's been a long spell since one of these posts, so let's jump right in and rope that calf. Rawhide has been marching along to the next branching point, when Fedora 27 will branch off for its release. There's a mass rebuild that should be happening very very soon now; there was a bit of a delay while tools were all sorted out. Look for a very very big rawhide update once that mass rebuild finishes. All the alternative arches are now in the same koji. We finally got s390x added in and going. Initially it was only 5 builders with 4GB of ram, so things got stopped up from time to time, as all it took was 5 long builds showing up and everything else waited behind them. A few weeks ago we increased things to 15 builders with 8GB of ram each, so hopefully that will keep up with builds as we go. We will see what the mass rebuild looks like with all the alternative arches included this time. I suspect it's going to take longer than before, but I'm not sure how much longer. Thanks to Patrick, you may also notice that uploading sources to our lookaside cache no longer uploads them twice. Instead it uses the Kerberos ticket cache to authenticate on the first go, so it just needs to upload once. We have missed some composes in the last few months (once for quite a while), and this is due to the compose process now failing the compose if all the release blocking deliverables are not there. On one hand that's nice because it means we always have those if the compose finishes; on the other it's not so nice because if it fails we need to go fix whatever breakage is there. Look for a new note soon, and ride safe out there!
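
As an aside on that lookaside change: here's a minimal sketch (assuming the requests-gssapi Python library) of what a single-pass upload using an existing Kerberos ticket cache can look like. The endpoint URL and form field below are placeholders for illustration, not the actual lookaside protocol.

    # A minimal sketch of one authenticated upload using the existing
    # Kerberos ticket cache (via requests-gssapi). The URL and form field
    # are placeholders, not the real lookaside protocol.
    import requests
    from requests_gssapi import HTTPSPNEGOAuth

    def upload_source(tarball_path):
        with open(tarball_path, "rb") as f:
            resp = requests.post(
                "https://src.example.org/repo/pkgs/upload.cgi",  # placeholder
                files={"file": f},
                auth=HTTPSPNEGOAuth(),  # authenticates on the first request
            )
        resp.raise_for_status()
        return resp.text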

CI and infrastructure hackfest 2017

I was hoping to write a bit of a summary for each day of the CI and Infrastructure hackfest last week, but there just wasn't enough time to sit down and write up blog posts. Each day we started bright and early, gathering in the hotel lobby at 8am, got to the Red Hat tower, got some breakfast and got to work until 5 or 6, and then we went and got dinner and back to the hotel to do it all over the next day. We definitely had some excellent discussions and got a lot of work done. First, a few thank yous: Red Hat provided us the use of the fittingly named "Fedora" room on the 9th floor to work in. It was a perfect size for our group; I think we had to scare up an extra chair once, but usually we fit just fine. OSAS (Red Hat's Open Source And Standards group) funded travel and lodging for folks. Paul Frields (stickster) handled logistics and tried to keep us all scheduled, on track, and in general cat-herded in the right direction. We had 3 BIG goals for this hackfest and a bunch of little ones. I think we did great on all counts. First, the larger goals:

  • Monday we got a detailed dump of information about our authentication systems: past, present and future, both from the perspective of sysadmins wanting to manage and fix things and of application developers wanting their apps to authenticate right. The high level overview is that we currently have FAS2 as the source of all our authentication knowledge. It syncs (one way) to freeipa (a pair of replicated masters). This sync can sometimes fail, so we now know more about what to do in those cases from the sysadmin side. Then we have ipsilon, which manages openid and openid-connect (and used to handle persona). We got some detailed workflows for each of these cases. Moving forward we want to get apps using OpenID Connect (see the first sketch after this list). Down the road we talked about replacing fas with a very thin community access API layer. Not sure that's going to happen, but it might be an interesting way to go.
  • We wanted to harness the CentOS CI pipeline for testing a subset of opt-in packages from Fedora. I wasn't directly working in this area, but I know by the end of the week we had CentOS CI sending fedmsgs in our staging env and had set up a CI instance near it with resultsdb and taskotron to manage tests (the second sketch after this list shows the general shape of consuming those messages). I think there's some more hooking up of things to go, but overall it should be pretty cool.
  • Finally we wanted to look into and set up our own OpenShift instance. We had some very nice discussions with Red Hat OpenShift folks who manage and deploy things bigger than we can imagine, and they gave us some very helpful information. Then we talked out initial policy questions and so forth. By Friday we had an OpenShift cluster up and running in our staging env. We still need to get some official certs, sort out the Ansible side of managing application deployments, and figure out what we want to do for persistent storage, but otherwise we made a vast amount of progress. You can find our questions and answers at https://fedoraproject.org/wiki/Infrastructure/OpenShift
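
On the OpenID Connect front, here's a minimal sketch of an app authenticating via OIDC using the flask-oidc Python library (which a number of Fedora apps use). The route, secrets file and claim name below are illustrative placeholders; the provider in our case would be ipsilon.

    # A minimal sketch of protecting a route with OpenID Connect via
    # flask-oidc. client_secrets.json (issued by the OIDC provider,
    # ipsilon for us) and the route are placeholders for illustration.
    from flask import Flask
    from flask_oidc import OpenIDConnect

    app = Flask(__name__)
    app.config.update({
        "OIDC_CLIENT_SECRETS": "client_secrets.json",  # provider-issued
        "SECRET_KEY": "change-me",                     # session signing key
    })
    oidc = OpenIDConnect(app)

    @app.route("/whoami")
    @oidc.require_login  # redirects to the provider if not logged in
    def whoami():
        # Pull a claim out of the id token once the provider logs us in.
        return oidc.user_getfield("email")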
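And on the CI side, consuming messages off the bus is roughly this simple with the fedmsg Python library. The topic filter here is a guess, since the actual CentOS CI topics were still being wired up:

    # A rough sketch of listening on the fedmsg bus for CI results. The
    # topic substring below is hypothetical; the real topics were still
    # being settled at the time.
    import fedmsg

    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if ".ci." in topic:  # hypothetical filter for CI-related messages
            print(topic, msg.get("msg", {}))
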
On smaller items:
  • We met with the Red Hat Storage folks who manage our backend storage. There are some nice improvements coming along later this year if we can adjust things to take advantage of them. Mostly that means splitting up our gigantic koji volume into 'archive' and 'active' volumes, and splitting our ftp volume into 'active' and 'archive' volumes as well. Then we can hopefully move the active volumes over to flash storage, which should be a great deal faster. All still up in the air, but promising.
  • We met with the folks who manage our main datacenter, and it's looking possible that we will be moving all our stuff to a new area a short distance away. If this comes to pass, it would be later in the summer and there will likely be some downtime. On the plus side we would get a chance to organize machines better, get new racks and better power and all around be in a better place moving forward.
  • We had a nice discussion around making bodhi faster. I've been over that many a time, but we did come up with a few new ideas, like generating drpms at update submission time instead of at mashing time (though that would need createrepo_c changes), or perhaps triggering pushes only on critpath or security updates, with other leaf-node packages just waiting.
  • There were a number of discussions around factory 2.0, modularity, branching, koji namespaces, etc.
  • We had several discussions on postgres BDR (bi-directional replication). I've moved a number of our apps to it in staging and hoped to roll it out in production, but there were some concerns. In the end we decided to look at deploying it and then deciding on an app-by-app basis which ones were ready to move over. Eventually we hope to have everything using it, but some apps need time to make sure they follow all of BDR's rules and do the right thing. Additionally, some apps may want to use BDR, but in sync mode, to avoid possible bad data on a node crash. koji upstream needs to support things before we can move koji, but some of our smaller apps may be able to move soon. We also decided we wanted a script that could detect problems by looking at an application's schema (a sketch of the idea follows this list); Pingou already went and had this done by the end of the week.
  • Tim Flink (Fedora QA) and I had a nice discussion about scaling the existing QA setup. We agreed that it might make good sense to see if we could migrate this into our cloud, provided we can get it up on a modern version we could actually support. That is likely to happen in the next few months, as we have some new hardware for it and are retiring some of the old hardware, so it's a good time to force a new setup.
  • We went to the Ansible offices and had beer and pizza and watched a baseball game. :)
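
As for that schema-checking script: one of BDR's rules is that every replicated table needs a primary key, so the core of such a check could be a query like the sketch below. Pingou's actual script may well differ; the connection string is a placeholder.

    # A rough sketch of the kind of check we wanted: BDR requires every
    # replicated table to have a primary key, so flag any that lack one.
    # Pingou's real script may differ; the DSN below is a placeholder.
    import psycopg2

    QUERY = """
    SELECT t.table_name
      FROM information_schema.tables t
      LEFT JOIN information_schema.table_constraints c
        ON c.table_schema = t.table_schema
       AND c.table_name = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
     WHERE t.table_schema = 'public'
       AND t.table_type = 'BASE TABLE'
       AND c.constraint_name IS NULL
    """

    conn = psycopg2.connect("dbname=someapp")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for (table,) in cur.fetchall():
            print("table without a primary key (breaks BDR):", table)
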
All in all, a super productive week. Look for lots of the above to come up in meetings, tickets, mailing lists and so forth as we share the plans we made, make sure everyone is on board with them, and adjust anything that needs it.

CI and infrastructure hackfest day -1

Ah travel, such fun, especially these days. Happily things went pretty nicely: security was not a hassle, the flight was a bit early and went well, and there were no problems meeting up with folks once I landed. Then off to a lovely dinner and to sleep, in time for the hacking to begin tomorrow.

CI and Infrastructure hackfest 2017 next week

Tomorrow I'm traveling out to Raleigh, NC for a gathering to work on CI and Infrastructure for Fedora, and I will be out there all next week. We will of course be around on IRC and hope to pull in remote folks who are interested in participating, but if you need us for something and can't find anyone, please file a ticket and we will get back to you as soon as we can. https://fedoraproject.org/wiki/CI_and_Infrastructure_Hackathon_2017 has a list of the things we hope to work on, but here's a short summary:

  • Get a bunch of information from Patrick on OIDC (OpenID Connect, basically the current spec for OpenID), both for application developers who might need to interact with it and for sysadmins who need to manage it.
  • Work on a bunch of items related to Continuous Integration in Fedora: koji and bodhi integration, and being able to use the CentOS CI hardware to run more tests.
  • Figure out a plan to set up an OpenShift instance in Fedora Infrastructure. This will help us deploy some new apps that expect that sort of setup, as well as look at moving some of our apps to this new and exciting workflow. We have a big list of stuff to figure out at: https://fedoraproject.org/wiki/Infrastructure/OpenShift
  • Some misc other discussions about database replication and other things where high-bandwidth, in-person talks would help.
It should be a fun and busy week... I'm going to try to summarize each day's discussions here, either daily or at the end of the week.

Rawhide notes from the trail, 2017-04-01 (no foolin!)

Just a few quick notes for those folks riding along the rawhide trail: There's some pretty bad brokenness in the polkit package that landed in rawhide on Thursday. Your best bet is to downgrade to polkit-0.113-7.fc26 until it gets sorted out. Symptoms include not being able to log in with gdm, lots of things being broken because they cannot start, etc. This is being tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1438086 FESCo approved the Fedora 27 schedule not long ago: https://fedoraproject.org/wiki/Releases/27/Schedule and so the next mass rebuild will happen 2017-07-12. Mark your calendars. As many of you may know, I switch off between gdm/Gnome and lightdm/Xfce pretty often. I finally tracked down one annoyance that has been hitting me for a while: on login to Gnome I was getting 2 polkit dialogs every time, asking for permission to handle rfkill and networking. Turns out this was due to blueman autostarting in Gnome without having those permissions. There was no reason for it to start there, since Gnome has its own built-in bluetooth support, so I filed https://bugzilla.redhat.com/show_bug.cgi?id=1432555 on that. The workaround is to add NotShowIn=GNOME; to its autostart file. Until next time, ride safe!

MUA++ (or on to thunderbird)

So, a few weeks ago I moved my MUA (Mail User Agent) from claws-mail over to evolution. Last week I decided to move on and give thunderbird a try. Mostly evolution worked, but it had some bugs that were quite annoying: from time to time it would duplicate emails on my work imap server. Suddenly I would get 50 copies of some list post. I could of course select them and 'delete duplicates', but this was pretty annoying. I tried all sorts of tuning to get it to not do that, but nothing seemed to work. Additionally I found the keyboard shortcuts difficult to get used to. So, thunderbird. For this I needed to make a change I had been meaning to make for a long time, but never got around to: I needed to switch to using my home server's imap server instead of delivering email to my laptop (thunderbird only does imap for incoming email). Fortunately, it was easy to just change my .procmailrc to deliver to the server and serve it via imap. However, then I ran into some real confusion: I had set up my server (dovecot) a number of years back to provide 2 'namespaces' in imap: the first was an mbox it would deliver email to, and the second was a Maildir it used for folders (this was due to having some friends using my mail server who insisted on using mail clients via shell that didn't understand Maildir). I had to do a bit of tweaking to get it working for me without breaking it for others. The mbox namespace also meant there was a mail directory with mbox folders, and if you were not careful how you set things up you would get those (mbox is a good deal less performant than Maildir, so I wanted to avoid it). Finally, I got it all working. So, after now using thunderbird for a week or so, the good:

  • thunderbird has no problems talking to various imap servers. No duplicate emails, no errors, everything works nicely and pretty quickly.
  • The lightning plugin is now built in/included in thunderbird, and it has had no problems talking to all my various calendars.
  • enigmail seems to do a fine job with encrypted emails and signing my outgoing emails.
  • The keyboard commands seem a lot easier to get used to, and with the Nostalgy extension it's pretty easy to file emails and jump to folders.
  • The search features seem very fast and work well. I 'star' mails I want to deal with later, and have a search folder that shows all starred emails. I can from there easily open a tab with the entire conversation if I want to read the thread the email was in again.
  • There's a handy 'Grouped' sort that's nice for some things. It will show you, for example, today's emails and let you expand previous days if you like.
The bad:
  • I cannot quite seem to get the message view to look the way I want. It seems to change what fonts it uses sometimes based on "I am not sure what". Possibly if the email is html only? Will keep looking into it.
  • I had to enter all my stupid filtering rules _again_. I just redid them for evolution, and now again for thunderbird. I really need to look into sieve and just do the filtering on the server. There oughta be a standard!
and the things that are just related, but not directly thunderbird:
  • My mail is now in imap on my main server and I can read it via thunderbird, or roundcubemail.
  • I've unsubscribed or otherwise removed myself from a bunch of lists and things that were sending me email I never read or cared about anymore. There are still some more of these to go, but it's good every once in a while to drop all your filters and rebuild them to see what should just never come in at all.
Will I stick with thunderbird? Time will tell. So far this week indications are good, but we will see.