
First week of July 2024 random musings

A bit of a short week this week, as Thursday is the 4th of July holiday in the US and I am taking Friday off, but still a bunch of goings-on.

koji was having some bad days on Sunday/Monday. It turns out that our block_retired script, which runs at the start of every rawhide compose, figured out that all the packages in epel7 were now end of life/retired, and so it started trying to untag and block all of them. Unfortunately, this script is normally meant for a few packages being retired at a time, and it wasn't really set up to handle 15k packages at once. Also, we don't actually want it to untag those packages; we want to keep them around, mostly for historical reasons. I managed to figure out what was going on and stop it, but it had untagged about 10k packages by then. Will look at cleaning this up more next week.
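
This is the kind of thing a simple guard would catch. Just as a minimal sketch (not our actual script), here's the sort of sanity check I have in mind, using the koji python API; the hub URL is real, but the threshold and helper are hypothetical:

    #!/usr/bin/env python3
    # Hypothetical guard for a block_retired-style script: refuse to untag
    # anything if the list of newly retired packages looks like a mass EOL.
    import sys
    import koji

    HUB = "https://koji.fedoraproject.org/kojihub"
    MAX_RETIRED = 50  # made-up threshold: more than this is not normal churn

    def untag_retired(tag, retired_packages):
        if len(retired_packages) > MAX_RETIRED:
            sys.exit(f"Refusing to untag {len(retired_packages)} packages "
                     f"from {tag}: that looks like a mass retirement.")
        session = koji.ClientSession(HUB)
        session.gssapi_login()  # untagging requires auth
        for build in session.listTagged(tag, inherit=False):
            if build["package_name"] in retired_packages:
                session.untagBuild(tag, build["nvr"])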

There was some discussion around the new ssh host key on fedorapeople.org. I had failed to announce that it had changed, and a few folks (very rightly!) asked if the change was expected. I actually thought about just preserving the old host keys, but they were made 10+ years ago now, so I figured it was time to generate new ones (and prefer the newer algorithm as well). I did then send an announcement to devel-announce about it. Out of this, docs were improved and I tried to push the idea of trusting the Fedora infrastructure ssh CA. This just requires you to add the CA to your ~/.ssh/known_hosts, and it will then trust host keys that are signed by it. This covers all Fedora infra hosts. You can of course also use SSHFP, but that requires some ssh settings and confirming that you are using a dnssec-enabled resolver, so it's a good deal more work. Anyhow, hopefully this host key will last us at least 10 years too.
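
For the curious, trusting an ssh CA is just a one-line known_hosts entry. A sketch of what it looks like (the key material here is a placeholder; grab the real CA public key from the infra docs):

    # In ~/.ssh/known_hosts: trust any host certificate signed by this CA
    # for matching hostnames. The key below is a placeholder, not the real one.
    @cert-authority *.fedoraproject.org ssh-ed25519 AAAAC3NzaC1lZDI1...placeholder

By comparison, the SSHFP route needs 'VerifyHostKeyDNS yes' in your ssh config plus a resolver that actually validates dnssec.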

There was a bunch more rhel7 vm cleanup that happened. We are still sadly not 100% done with that, but there's only a few left, almost all internal things, blocked by various items that will hopefully unblock in the next few weeks so we can get them all dealt with too. I think the rhel7 EOL has been harder on us because of the python2->python3 move. Not that I disagree with the move, but it's just been a lot of development work to bring things up to date and get them back into a maintained state. Really it's our own fault, but it's hard to prioritize things when they are working fine.

Finally, just a note about next week: I'm off Monday, not doing anything fun, but going in to have a molar pulled. So, I am likely to be a bit grumpy later in the week (I can't even have coffee for a while; that's going to hurt).

Fedoraproject doings, last week of June

Hello everyone, long time no blog. :)

For a while now, I (along with adamw and sometimes others) have been posting a shortish summary to mastodon of the things we worked on each day. I'm not sure how useful people find this, but I have seen some indications that some people are reading them. However, 500 characters is pretty limiting; most days I only have room to be pretty terse. So, I thought it might be nice to start blogging again, go over the past week with more verbosity, and discuss things in more detail than a mastodon toot usually goes into.

This Sunday (the 30th), RHEL7 finally reaches end of life. We have been trying to finish up all the migrations away from it over the last few months. It looks like we aren't going to make it 100%, but almost all the things left are internal only, and we can clean them up over the coming week or two. Things always take a lot longer than you might think, and some of these have taken quite a long road, but it's great to see them finally done. A quick list and some thoughts about each:

  • mailman3 / lists.fedoraproject.org: This one has taken a really, really long time and the work of a number of folks. All the packaging work to get it up and going in Fedora and EPEL was a heroic effort. Then deploying staging, getting everything worked out for deployment of the new stack, testing everything, and tweaking things. Thank you to all who worked on it. It feels great to have an up to date mailman and an up to date OS it's running on.
  • fedorapeople/fedoraplanet: Our fedorapeople server (an instance for contributors to share files and such) has also taken a long road. It had our old planet (fedoraplanet.org) running on it, so we had to move that first. The planet is now running in openshift on a modern stack, getting feeds to aggregate from our account system instead of from files on fedorapeople. After that was finally ready, moving fedorapeople to RHEL9 was pretty easy, and it is now all done.
  • PDC (product definition center) has also taken a long path. We adopted it, it got tangled up in a lot of our processes, and then upstream dropped it. We are not 100% done with it yet, but it's very close. The last few items using it should hopefully be moved by Monday, and we can hopefully turn it off next week. That will be great. It's got a gigantic database, it never updated cleanly, and it throws random 500's from time to time; it will be great to be rid of it.
  • Some smaller internal-only things in the coming weeks: 2 virthosts need reinstalling, but they are blocked on some on-site changes. fedimg will be replaced by our cloud-image-uploader service; that's very close to done. Our github2fedmsg service is being re-written. We need to look at retiring fedmsg entirely so we can shut down its busgateway. Finally, some sundries servers need reinstalling, but I plan to do that today, at least mostly.

So all in all not perfect, but pretty close to the deadline. :)

Another thing I looked at earlier this last week was our power9 builders. The buildvm-ppc64le build vm's are currently our slowest builders, and maintainers often have issues with them. I updated the 4 main power9 virthosts to the latest kernel and f40 updates, but I am not sure how much it really helped. The main problem on these seems to be that they use 7200rpm sata drives. Those are just really not very fast, and when you get a bunch of vm's hitting the same raid, the seek times just kill you. I did play around with some different raid configurations, but it didn't seem to help too much. I've requested more memory and some ssd's for these machines in next year's budget, so hopefully we will get that. I am also considering looking at using iscsi from faster storage. In the meantime, sorry they are slow; I'm doing what I can to mitigate that.
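
To put a number on the seek-time problem, here's a minimal random-read probe of the sort I mean (the path is hypothetical; on a cold cache, a 7200rpm disk will show multi-millisecond latencies where an ssd comes in well under one):

    #!/usr/bin/env python3
    # Rough random-read latency probe. Point it at a large file on the
    # storage you want to test. Without O_DIRECT, reads can be served from
    # page cache, so use a file much larger than RAM (or a cold cache).
    import os
    import random
    import time

    PATH = "/srv/test/bigfile"  # hypothetical test file
    READS = 500
    BLOCK = 4096

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.monotonic()
    for _ in range(READS):
        os.pread(fd, BLOCK, random.randrange(0, size - BLOCK))
    elapsed = time.monotonic() - start
    os.close(fd)
    print(f"avg: {elapsed / READS * 1000:.2f} ms per {BLOCK}-byte random read")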

In rawhide news, I have been hitting a weird issue with the kernel and my backups. Normally they take something like 5-7min and I barely notice them happening at all, but lately they cause the laptop to ramp all fans to 100%, become completely unresponsive, and take something like an hour. It might be that this is fixed in the most recent kernels; I will need to do some testing this weekend. It's pretty annoying to test, because you start the backup and... have to use the hard power off button if you want to get back to working.

Next week is going to be finishing up more rhel7 stuff and then moving on to reinstalling/upgrading builders to f40 (we couldn't before now because f40 has a new createrepo_c that defaults to zstd, which epel7 couldn't handle, so we couldn't upgrade). Also, next week or the week after we will probably need to do another mass update/reboot cycle before the f40 mass rebuild later in July. Then, on to flock in early August.

Some musings on matrix

The fedoraproject has moved pretty heavily toward matrix in the last while, and I thought I would share some thoughts on it: good, bad, technical and social.

The technical:

  • I wish I had known more about how rooms work. Here's my current understanding:
    • When a room is created, it gets an 'internal roomid'. This is a ! (annoyingly, for the shell) followed by a bunch of letters, a ':', and the homeserver of the user who created it. It will keep this roomid forever, even if it no longer has anything at all to do with the homeserver of the user who created it. It's also just an identifier: it could say !asdjahsdakjshdasd:example.org, but the room isn't 'on' example.org and could have 0 example.org users in it.
    • Rooms also have 0 or more local addresses. If there are 0, people can still join by the roomid (provided the room isn't invite only), but that's pretty unfriendly. Local addresses look like #roomname:homeserver. Users can only add local addresses for their own homeserver: if you were using a matrix.org account, you could only add #roomname:matrix.org to a room, not an address on any other server. Local addresses let people on the same homeserver find rooms (see the sketch after this list for how an address resolves back to a roomid).
    • Rooms also have 0 or more published addresses. If one or more are set, one of them is the 'main published address'. These can only be set by room admins and optionally published in the admin's homeserver directory. Published addresses can only be chosen from the list of existing local addresses. That is, you first add a local address; then you can make it a published address, mark it as the main published address, and choose whether it appears in your homeserver directory. Publishing the address to your directory allows users to search your homeserver and find the room.
    • Rooms have names. Names can be set by admins/moderators and are the 'human friendly' name of the room. They can be changed, and nothing about the roomid or addresses changes at all. Likewise the topic, etc.
    • Rooms are federated to all the homeservers that have users in the room. That means if there are only people from one homeserver in the room, it's actually not federated/synced anywhere but that homeserver. If someone joins from another server, that server gets the federated data and starts syncing. This can result in a weird case: someone makes a room, publishes its address to the homeserver directory, other people join, and then the room creator (and everyone else from that homeserver) leaves... the room is no longer actually synced on the server its address is published on, so it can't easily be joined by address anymore.
    • Rooms work on events published to them. If you create a room and then change the name, the 'name changed' event is in that room's timeline. If you look back at the events before that one, you can see the room state at that time, with the old name, etc.
    • Rooms have 'versions': basically, which version of the matrix spec the room uses. In order to move to a newer version, you have to create a new room.
    • Rooms can be a 'space'. This is an organizational tool to show a tree of rooms. We have fedora.im users join the fedoraproject.org 'space' when they first log in. This allows them to see the collection of rooms and join some default ones. Spaces really are just rooms though, with a slightly different config. Joining a space room joins you to the space.
  • The admin API is really handy, along with synadm ( https://github.com/JOJ0/synadm ). You can gather all kinds of interesting info, make changes, etc.
  • When you 'tombstone' a room (that is, you put an event in it that says 'hey, this room is no longer used, go to this new room'), everyone doesn't magically move to the new room. They have to click on the notice, and in some clients they just stay in the old room too. If the tombstone happened a long while back and a bunch of people have since left, depending on your client you may not even see the 'go to new room' button. ;( For this reason, I've taken to renaming old rooms to make it more apparent.
  • There's a bit of confusion about how fedoraproject has set up its servers, but it all hopefully makes sense: We have 2 managed servers (from EMS). One of them is the 'fedora.im' homeserver and one is the 'fedoraproject.org' homeserver. All users get accounts on the fedora.im homeserver. This allows them to use matrix, make rooms, and do all the things they might need to do. Having fedoraproject.org (with only a small number of admin users) allows us to control that homeserver. We can use it to make rooms 'official' (or at least more so) and publish them in the fedoraproject.org space. Since you have to be logged in to a specific homeserver before you can add local addresses on it, this nicely restricts 'official' rooms/addresses. It also means those rooms will be federated/synced between at least fedoraproject.org and fedora.im (but it also means we need to make sure at least one fedoraproject.org user is in those rooms for that to happen).
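
As an aside, the roomid/address split above is easy to poke at yourself via the public client-server API. A minimal sketch of resolving an address to its roomid (the alias here is just an example, and I'm assuming the homeserver answers the client API on its own domain; .well-known delegation may point elsewhere):

    #!/usr/bin/env python3
    # Resolve a matrix room address (alias) to its internal roomid. No auth
    # is needed for this endpoint. The alias below is only an example.
    import json
    import urllib.parse
    import urllib.request

    HOMESERVER = "https://fedora.im"
    ALIAS = "#admin:fedoraproject.org"  # any published address works

    url = (f"{HOMESERVER}/_matrix/client/v3/directory/room/"
           f"{urllib.parse.quote(ALIAS, safe='')}")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    print(data["room_id"])  # the !xxxx:homeserver internal roomid
    print(data["servers"])  # servers currently participating in the room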

The good:

  • When I used to reboot my main server (which runs my IRC bouncer), I would just lose any messages that happened while the machine was down. With matrix, my server just pulls those from federated servers. No loss!
  • In general things work fine: people are able to communicate, meetings work fine with the new meeting bot, etc. I do think the lower barrier to entry (not having to run a bouncer, etc) has helped bring in some new folks who were not around on IRC. Of course there are some folks still just on IRC.
  • Being able to edit messages is kind of nice, but can be confusing. Most clients assume that when you press up arrow you want to edit your last line instead of repeating it. This is not great for bots, or if you actually want to say the same thing again with slightly different stuff added. I did find out that nheko lets you use control-p/control-n to pull up next/previous lines to resend (while up arrow does edit).

The bad:

  • Moderation tools are... poor. You kind of have to depend on sharing lists of spamming users to try and help others block them, but there's no real flood control or the like. I'm hoping tools will mature here, but it's really not great.
  • Clients are still under a lot of development. Many support only a subset of the available features. Many seem to be falling into the 'hey, this is like a group text with 3 of your buddies' model, which may be true sometimes, but the vast majority of my use is talking to communities where there can be 5, 10, or more people talking. Little message bubbles don't really cut it there; I need a lot of visible context when answering people. I'm hopeful this will improve over time.
  • I get that everything is a room, but it's a bit weird for direct messages. Someone sends you a message: it makes a room, they join it, then it invites you. But if you aren't around and the person decides they don't care anymore and leaves the room, you can't join; you have to just reject the invite and never know what they were trying to send you.
  • Threading is a nice idea, but it doesn't seem well implemented on the client side. In Element you have to click on a thread, and it's easy to miss. In Nheko, you click on a thread thingie, but then when you are 'in' a thread you only see that, not activity in the main room, which is sometimes confusing.
  • Notifications are a bit annoying. They are actually set on the server end and shared between clients. Sometimes this is not at all what I (or others) want. For example, I get 10 notifications on my phone, read them, and see there are some things I need to do when I get back to my computer. So, I get back later and... how can I find those things again? I have to remember them or hunt around; all the notifications are gone. I really, really would love a 'bookmark this event' feature so I could go back later, go through those, and answer/address them. Apparently the beeper client has something like this.

Anyhow, that's probably too much for now. See you all on matrix...

Time off when you're paid to work in a community

I thought I would share my thoughts around taking time off when you are paid to work in a community setting. I'm fortunate enough to be paid by Red Hat to work in the Fedora community, and I think the way I approach time off confuses some people (in both the community and the company).

On the company side, the understanding for "time off" (or pto, "paid time off") is that you will not work on work items and will instead go relax or do other things you enjoy, then come back recharged and ready to work again. All the companies I have worked for have been good on that understanding, but the problem for me is that "things I enjoy" also happens to include working on (some) things in the community. Sometimes people get very emphatic about people on pto not 'working'. I've even heard suggestions of removing access to force people to not work, or seen people berated for chiming in on something when they are supposed to not be working. When I am on pto, I only do the things in the community that I enjoy. This still lets me relax and have time away from meetings and tasks I don't enjoy but normally have to do. It gives me the choice of what I find relaxing and/or enjoyable.

On the community side, things are more murky and depend, I think, on how things are communicated. If you tell the community "hey, I am going on vacation, I will not be around", most are understanding. I try to convey that I am not 'working' and might or might not be able to do things. The default expectation should be that I am not around, and if I am and help with something, great; if not, be patient and someone else will help out, or I will when I get back. Of course, when folks expect you to be around all the time and you suddenly aren't, that can cause some extra pings and communication.

Asus Hyper M.2 Gen 4 card review

Last year I upgraded my main server from an old 1u cloud box to a new Ryzen Quiet PC. The motherboard has 2 nvme slots on it, and I filled them up and mirrored them for the OS and vm storage, but my main storage setup (music, video, backups, etc) was/is a RAID of old 7200rpm spinning SATA drives. At the time I just moved it to the new server with a note to replace it later. Recently one of the drives completely died, so I knew it was time.

I picked up an Asus Hyper M.2 x16 Gen 4 card: https://www.asus.com/us/motherboards-components/motherboards/accessories/hyper-m-2-x16-gen-4-card/helpdesk_knowledge?model2Name=Hyper-M-2-x16-Gen-4-Card. This is a pcie gen4 x16 card that has slots for 4 nvme drives on it. My plan was to fill it up, raid those drives for storage, and completely replace the old spinning drives.

The card was ~$75, but I see it's dropped to ~$58 now. The card is well made and solid. It has a quite massive heat sink and a small fan (which you can enable/disable with a switch on the back). It also has lights on the back to show activity on each of the drives.

First, to use this card and see all 4 nvme drives, your motherboard MUST support pcie bifurcation. This means it has to have a setting to take an x16 slot and split it into 4 x4 'lanes'. I made sure the one I got did, but even so there were some caveats that I ran into. On my MB, only the first x16 pcie slot can be set for x4x4x4x4, so the Hyper card must go there. With the Hyper card in pcie slot 1 and the video card in slot 2, it would only do x8x8 on the first slot (so only 2 of the 4 drives are seen). Unfortunately, I was unable to move the video card to the 3rd x16 slot because its heat sink was too large and would try to occupy the same space as a power connector. In fact, no video card I could find would fit in that slot due to the power connector. So, at first, I just booted with no video card at all. Blindly typing my luks passphrases isn't much fun, but it worked, and all 4 nvme drives are seen just fine.

So, after looking around a great deal, I finally found an x1 video card (my MB has 2 x1 slots, one of which I was using for a network card). It's mentioned in this Phoronix review: https://www.phoronix.com/review/asus-50-gpu Sadly, it's an nvidia based card "NVIDIA Corporation GK208B [GeForce GT 730]", but it's the only x1 card I could find at all. I also don't need 4 HDMI outputs, only one, but oh well. I got the card last week and put it in. Sadly, it didn't seem to work, but I realized the second x1 slot has a cooling baffle right after it, and the card couldn't fully insert. So, I swapped it with the network card in the other slot and success! Everything comes up right. (Aside from the network card changing its name due to 'predictable network names' and messing up the bridge it was supposed to be on, but that was easy to fix.)
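
If you hit that same renaming surprise, pinning the name with a systemd .link file is one way out. A minimal sketch, matching on the MAC address (the values here are placeholders, not my actual setup):

    # /etc/systemd/network/10-lan0.link
    # Pin a stable name for this NIC so moving it to a different slot
    # doesn't rename it out from under the bridge config.
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0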

Cooling seems pretty nice on the card. The drives sit at about 49C idle, and when syncing a few TB of data to them they only ever crept up to about 53C. The rated high temp on these drives is 89.8C. I managed to pick up 4 4TB drives at a pretty good price due to sales last week. Performance is great! As soon as I finish syncing content off them, I am looking forward to powering off the old spinning drives and retiring them.

If you're looking to add some more nvme storage and your MB supports the bifurcation settings needed, this is a pretty nice little card and storage expansion option.

20 years of blog!

Just a short post to note that 20 years ago (!) I posted the first entry in this blog. I've been rather busy of late and haven't posted too much, but it's still up and a going concern. :) Back then, I posted about a long Thanksgiving trip. This year I stayed very close to home for Thanksgiving. Changes all around.

I plan some posts soon reviewing the amd/ryzen frame.work upgrade I got and the lovely asus hyper M.2 card (4 nvme drives). Also, some thoughts on open source and the like.

Here's to 20 more.

Some Fedora Infra stats in Nov 2023

Things have been crazy busy of late, but with Fedora 39 out the door and a week of vacation coming up I am finally starting to feel caught up. So, I thought I would share a quick post on some stats:

Number of instances in fedora-infra ansible inventory: 448

Instances here means bare metal machines, vm's on those bare metal machines and some aws instances. It doesn't include containers or the like.

Breakdown by OS:

273 are some version of Fedora and 175 are some version of RHEL.

Of the Fedora ones, 1 is f40 (rawhide-test), 39 are f39, 210 are f38 (all the builders are still on f38 and going to be reinstalled with f39 soon), 13 are f37, and the rest are odd older things (our OSBS cluster, which is slated for retirement).

Of the RHEL ones, 70 are RHEL9, 59 are RHEL8, and 46 are RHEL7. Many of the ones still on RHEL7 are services we are working to retire or are waiting on applications to be ported to RHEL9 (mirrormanager, badges, mailman3, osbs, pdc, mbs). Many of the RHEL8 ones are just tricky to upgrade, like database servers or the virthosts that house those database servers.
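
For the curious, numbers like these are easy to pull out of cached ansible facts. A minimal sketch, assuming a local JSON fact cache (the path is a placeholder, not our actual setup):

    #!/usr/bin/env python3
    # Tally OS distribution/version from an ansible JSON fact cache.
    import json
    from collections import Counter
    from pathlib import Path

    CACHE = Path("/var/cache/ansible/facts")  # hypothetical cache location

    counts = Counter()
    for factfile in CACHE.iterdir():
        facts = json.loads(factfile.read_text())
        dist = facts.get("ansible_distribution", "unknown")
        ver = facts.get("ansible_distribution_major_version", "?")
        counts[f"{dist} {ver}"] += 1

    for osver, n in counts.most_common():
        print(f"{n:4d}  {osver}")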

Likely in the coming weeks I will try and get a bunch more of those uplifted before the end of the year. There may be some downtime doing the databases, but hopefully it will be minimal.

4 month car update

So, we have had our RAV4 Prime for almost 4 months now, and I thought I would post an update.

TLDR: Really happy with the car overall and am enjoying driving it.

On economy: the 'guess-o-meter' now shows 52mi of range on a "full" charge. In the real world I end up getting right around 50, or a tad below. From driving to town and back a lot in EV mode, I realized that it's actually downhill into town, so I use about 4mi or so less range going in than coming back home, due to regenerative braking and less uphill. My long errand run (shopping for my mother-in-law the next town over) ends up being about 50-51 miles round trip, so I end up switching to HV mode for the last mile or two, but that's just fine with me. I don't have to care that I don't quite have enough EV range, or try to charge on the way, or worry about making it.

On gas usage: we got the car with a full tank from the dealer, and we used a fair bit of that up at first with a number of trips to Portland and back and to the coast and back. I topped off the tank about 3 months ago and... it's at about 3/4 of a tank now. I guess I should run in HV mode some more to use up the gas before it goes stale. :)

The car came with a 1 month trial of sat radio, which we never used. I just play music from my phone via bluetooth, and sometimes we listen to local radio. That said, the sat radio people really, really want us to subscribe. I think I get an email from them every other day, they have actually called me on the phone, and when you switch to radio the car defaults to a 'sat radio sample' channel. It's a bit annoying. Hopefully they will get the hint.

I wish there was a bit more data history in the car. For example, when you stop somewhere it displays a trip report type of thing, which is great, but if you happen to not look at it, or don't look quickly, it's gone and you can never see it again. I'd love for that to be a list you can look through with time, avg speed, avg kWh/mi, gal/mi, etc. If you have a charging schedule, it also displays a note about that, making the trip report display even shorter.

Just loving all the lazy things: automatic wipers, automatic lights, automatic bright lights, automatic door unlock when you put your hand on the handle with the key on you, etc.

Overall, quite happy and hopefully this will last us many many years until EV's are super great.

Flock to Fedora in Cork, IE (2023)

Just got back from our first in-person flock since 2019, and it was amazing. I thought I'd share my journey and thoughts from it here in longer form.

I did some prep work the previous week, trying to make sure as best I could that infrastructure and releng stuff was stable and wouldn't need any intervention from me. That mostly worked out in the end, with only some releng work (starting mass signing of f40 in prep for branching this coming week) and some trouble with the matrix/irc bridge (which I can't do much about aside from letting people know).

Travel to flock was fine. I took advantage of my favorite flight from PDX to AMS and then to Cork directly. In AMS I ran into Justin Flory and David Cantrell, who were both not planning on being there at the same time as me, but due to flight changes were. The flight to Cork was quick and we took a cab to the hotel.

The hotel was fine. It was a bit out of the center of town, which meant you had to take a cab or bus most of the time, but it wasn't too bad. The rooms oddly didn't have heating or cooling, but did have windows that opened, so I just kept mine open and was just fine. The conference center was in a separate building behind the hotel, and it was a bit weird to walk all the way back there on the 3rd floor to get to it, but it worked out fine. The elevator had an amusing 'off by one' error in it: the ground floor was "1", the first floor was "2", etc. The food was all fine to me. The hotel did a buffet breakfast and did lunch/coffee breaks, etc.

I left Monday, but arrived Tuesday afternoon, got settled and looked around, then went off to dinner with various folks. It was really nice. We went to a place called Market Lane, and they did a pretty good job accommodating a large group of rowdy nerds.

Wednesday the conference kicked off with some introductions from Justin and then Matthew Miller's 'State of Fedora' talk. This was great as always, but sadly the streaming / recording had problems at the beginning, so Matthew redid the talk on the last day in the afternoon for a smaller audience. I think both were great. I think I liked the first one better, but that might be because I was less tired. Next I went to Justin Forbes' talk on the state of the kernel, informative as always. Then on to a roundtable about packaging problems in modern languages from Jens Peterson. Some good comments from a number of folks, but I am not sure we came up with much of a plan to help aside from moving to more bundling. I was hoping we could look at sharing tooling to handle large package sets, but we can discuss this more on other platforms moving forward. Then, on to a talk about the current state of Infra and Releng applications from Akash and Aoife. They made this talk really nice and fun and gave out prizes! Great work by them putting this together. Next I wanted to go to the RiscV talk, but somehow decided to go to the AI/ML in QA talk instead. Really interesting to see how we might use AI for our QA efforts. I'm not a big fan of AI/LLM's in general, but I think they do have uses, and this is a clever one. Next up was a state of Fedora CI. It's amazing how long it's taken us, but we are finally getting there with CI. Then to a talk about EPEL. Great to get more visibility for a subproject of Fedora that's so popular and useful. The great Flock international candy swap took place. So much cool and interesting candy. Someone (Carl!) even brought some jerky. Tons of good stuff. The day ended out with a lot of talking to various people at the game night, the hotel bar, and then finally the hotel lobby when they kicked us out of the bar.

Thursday started way too early at 8:30 with a great keynote by Jen Madriaga on Diversity, Equity and Inclusion. Some very good points and things to think about. We can all be better here and help each other. Next up was a "Meet your FESCo" session. We only had 4 of the 9 FESCo members present this time, but we talked and answered questions and hopefully made some amount of sense. I then headed out to a talk on what's new in systemd. Wow, so many things I had no idea about. I hope the slides for this are up somewhere, because even though I followed along on my laptop, there were things I didn't get to try out. So many good things. Next was a talk on ansible packaging in Fedora / EPEL. An excellent overview from gotmax23, who I was finally able to meet in person. So nice to have someone take over the ansible maintainer mantle. I almost always enjoyed it, but I just don't have the time to devote to it that I used to. It's in good hands. Next was Matthew Miller's discourse discourse, but it didn't really get into how to move discourse away from him doing so much; it turned into a more general discussion about it and how and whether we should convince people to move there. Still lots of good info and things that were good to bring up. Next was the upstream collaboration in Enterprise Linux panel. It was all surprisingly cordial, and for much of it the panelists seemed to all agree on things. Then the evening events: dinner at a mexican food place and a scavenger hunt. I was wiped out, so I headed back to sleep after the dinner. Even though I passed out at like 8:30pm, I didn't really seem to catch up on sleep much. Seems to be what happens at flock.

The last day of flock, Friday, again started off at 8:30am. This day was devoted to a Mentor Summit. After an introduction, I was on a mentoring panel. This session was one of the best of the conference, I thought. Some really great questions from the audience and from our moderator, Amita Sharma. She kept us going with great questions, and the time flew by. My fellow panelists did an awesome job too. Next I went to a workshop I was running with James Richardson on revamping our onboarding, mentoring, and the docs about those in Fedora Infrastructure and Release Engineering. Surprisingly, we had a really nice crowd of folks, and we dove right in. Tons of good ideas and suggestions. As soon as we are recovered from travel, James and I will be writing things up for a round of review with the community, and then we can dive in, revamp the docs, and start trying ideas. Infra and Releng are great, fun areas to contribute to, and I look forward to onboarding and mentoring a bunch of new folks. The workshop was scheduled to continue after lunch, but we lost a lot of people (either leaving early or going to other sessions), so we did just a bit more and wrapped things up. Then off to the final night. There was a ghost tour, but I was tired and a bit footsore, so I tagged along with some folks getting dinner in Cork and then passed out early for my super early travel back.

My Saturday started at 4:30am or so. Got up, showered, packed, checked out, and met the cab to the airport at 5:30am. Then a flight to Heathrow (which I had never been to before). Amusingly, my brother had been vacationing in the area and was in fact flying back to the US that same morning. So, we met up in the airport, and he got me into the lovely Virgin Atlantic lounge, where we had breakfast and caught up a bit. Sometimes these weird things work out. Next was my 9.5 hour flight to Seattle. That went mostly fine; I usually just read or listen to podcasts and ignore the world. I have to say that noise canceling headphones are sure nice for these trips. I picked up a new Sony WH-1000XM5 set before this trip, and they did an outstanding job. Tons of battery life, super good noise canceling. Landed in Seattle and then walked. And walked. And walked. I think it was probably a mile or two of corridors and up and down and around before getting to the passport control line. It was really a lot of walking after being in planes all day. Finally got past that and... my next flight was in another terminal, so more walking to a train and more walking. :) (I did over 10k steps Saturday.) Finally got to my gate and... they changed the gate. Got to the new gate and... they didn't have a driver for the bus from the gate to the plane. When they finally did, they made me check my bag because of limited space. Finally off and landed in Portland, then in to wait for my bag. Then the shuttle to the lot where I parked, and finally the 2 hour drive home. Whew. Pretty epic day.

I have to say the hallway track was excellent as always. I had a number of really nice conversations with all kinds of people on all kinds of topics, from Irish taxes to Community Building to Mobile devices and hardware support to Boating to Weather to Books, etc.

So, whirlwind travel, but really, really nice to see in person the people I talk with most days over the network. I really hope next year we get more of the folks who were not able to make it this time around. As always, flock leaves me body-tired but mind bursting with all the possibilities!