For the past week (September 9-14), I've been down in Bellevue, WA for my second Core Python sprint. The sprint is hosted by Microsoft, and funded by The PSF.
Last year, I was only able to stay at the sprint for three days. This year I stayed for the entire week, and I got a lot more done.
Thanks to Steve Dower for organizing this sprint and figuring out all the logistics for us. Coordinating this is no easy task, and everything ran smoothly.
Big thanks to Microsoft for hosting us in Reactor Redmond B20. The facility and the campus were really awesome. (Ice cream at the back of the room!) I got to try out the HoloLens! Thanks for treating us to a nice dinner, with guest speaker Anders Hejlsberg.
Thanks to The PSF for providing the funding that brought many core devs from all over the world. It's been valuable to spend the whole week collaborating and talking face-to-face; we really don't get many opportunities to do this. Sprints at conferences are normally our chance to onboard new contributors to Python. A core sprint like this serves a different purpose: we get to focus on some of our bigger projects, brainstorm with each other, and make decisions.
Thanks to Zapier, especially my manager and my teammates on the Partner Engagement Team, for being supportive and giving me the time away from work. Zapier has a flexible time-off policy, so I'm not limited to taking only X days off from work each year. In addition, I got to focus on Python for the whole week without needing to keep up with daily standups or weekly team meetings.
Thanks to my family for their support, and for managing on their own while I'm away. I have been traveling without them, for work and for Python, once a month since January of this year, with more such trips already arranged through August 2019. It really has not been easy for any of us, but it motivated me to make the most of my time at the sprint.
Thanks to the core developers who came, away from their homes, their families, and their work, for Python. We had many great discussions and chats during the past week; they've all been very productive, and I really enjoyed being able to spend time with everyone for Python.
I've posted my daily updates on Twitter.
Here's the copy-pasta 🍝 version:
Day 1 report:
- automerge bot
- reviewed Larry's fake f-strings hack
- CLA discussion with Brett
- #PEP581 discussion with Ezio
- helped Ethan/Carol with git
- trying to name project
- started new unnamed project
- wrote emails
Day 2 report:
- show off automerge bot
- CLA discussion with Brett
- chat about workflow, bots, #PEP581 with Kushal, Victor, Pablo, and Guido
- helped Ethan with git
- decided on new project name: blurb_it
- wrote emails
Day 3 report:
- #PEP581 group chat with Ezio, Ned, Pablo, Petr
- work on blurb_it
- core dev group chat
- small enhancement to automerge bot
- bought an xbox
Day 4 report:
- dealt with CoC case
- chat about aiohttp async task queues with Andrew
- work on blurb_it
- PSF infra chat with Ernest
- core dev group chats
- gave lightning talk about miss-islington and blurb_it at Microsoft meetup
Day 5 report:
- miss-islington rate limit issues 😕
- self-declaration as "interested in emoji" 😎
- Python language summit planning with Łukasz
- deployed unofficial blurb_it
- exhausted 😩
- three shots of tequila 🥃
New Python Core Developers
Welcome Emily Morehouse-Valcarcel and Lisa Roach as new Python core developers! They're both brilliant, and they have been contributing to Python over the past couple of years.
miss-islington
miss-islington is a GitHub bot that was first developed at last year's core sprint. Her original task was to automatically create backport PRs.
Over time, she gained new privileges: she's been allowed to merge her own PRs once they have been approved by another core dev.
This week, miss-islington gained a new superpower: she can now automatically merge any pull request that a core dev wants her to merge.
For context, there had been a long discussion about this "automerge" ability. The discussion started in core-workflow issue #29, opened by Donald Stufft in February 2017. Automerge is a productivity feature that GitLab has but GitHub does not yet provide. And as an indication of how complicated the CPython workflow is, further discussion spilled over into bedevere issue #14.
While we had reached a decision on how automerge should work, I just had not found the time to focus on implementing it. I spent almost the entire first day of the sprint implementing it. I don't normally have that "one full day to focus, undisturbed, and work on Python" luxury, so needless to say, I might never have gotten this done without coming to the sprint.
If you're curious about how this was implemented, miss-islington is open source. In addition, I will be giving a keynote at DjangoCon US 2018, titled Don't Be a Robot; Build the Bot, where I plan to describe some of miss-islington's architecture and the challenges in building it.
These are the pull requests related to the automerge feature:
- Automerge PR labeled with "automerge"
- Normalize commit messages on automerge
- Don't automerge if there is a "DO-NOT-MERGE" label or a "CLA not signed" label
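Putting the rules from those pull requests together, the decision logic can be sketched as a small pure function. This is an illustrative sketch, not miss-islington's actual code; the label names come from the list above, and the function name and parameters are my own.

```python
# Labels that block automerge, per the rules described above.
BLOCKING_LABELS = {"DO-NOT-MERGE", "CLA not signed"}

def should_automerge(labels, checks_passed, approved):
    """Return True if the bot may merge this pull request.

    labels: label names on the PR, as reported by the GitHub API
    checks_passed: True if all required CI status checks are green
    approved: True if a core developer has approved the PR
    """
    labels = set(labels)
    if "automerge" not in labels:
        return False          # a core dev must opt in via the label
    if labels & BLOCKING_LABELS:
        return False          # blocking labels always win
    # only merge once CI is green and a core dev has approved
    return checks_passed and approved
```

The actual bot reacts to webhook events (label added, review submitted, status check completed) and re-evaluates a check like this each time one fires.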
Open issues that we will need to address, eventually:
- We've been rate limited since automerge was deployed
- Keep track of who triggered the automerge
- miss-islington should not self-assign PRs that fail to backport
- automerge shouldn't let GH wrap the first line of the commit message
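Two of the open issues above concern the merge commit's first line: GitHub wraps a title longer than about 72 characters onto the commit body, so the bot has to keep it short and append the "(GH-NNNN)" suffix that CPython's workflow uses. The helper below is a hedged sketch of one way to do that; the truncation strategy is my own assumption, not miss-islington's implementation.

```python
# Keep the commit title within GitHub's first-line limit so it never
# wraps into the commit body.
TITLE_LIMIT = 72

def format_commit_title(pr_title, pr_number):
    """Build a merge commit title like 'Fix the thing (GH-1234)'."""
    suffix = f" (GH-{pr_number})"
    room = TITLE_LIMIT - len(suffix)
    title = pr_title.strip()
    if len(title) > room:
        # truncate and mark the cut with an ellipsis
        title = title[: room - 1].rstrip() + "…"
    return title + suffix
```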
blurb_it
Part of our requirement for every pull request sent to CPython is a news entry describing the change. This was heavily discussed and designed in core-workflow issue #6, and we now have Larry Hastings's tool, blurb (available on PyPI), to help with this. In addition, we have a status check in place by bedevere to ensure that the news file exists and that we don't forget to add it.
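Concretely, each news entry is a small reStructuredText file under Misc/NEWS.d/next/, named with a timestamp, the bpo issue number, and a nonce. The path-building helper below is illustrative, assuming that naming scheme; blurb itself handles the details (including how the nonce is generated).

```python
from datetime import datetime

def news_entry_path(section, issue, nonce, now=None):
    """Build a blurb-style news entry path, e.g.
    Misc/NEWS.d/next/Library/2018-09-14-10-00-00.bpo-12345.AbCdEf.rst
    """
    now = now or datetime.now()
    stamp = now.strftime("%Y-%m-%d-%H-%M-%S")
    return f"Misc/NEWS.d/next/{section}/{stamp}.bpo-{issue}.{nonce}.rst"
```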
Larry's blurb works really well, and everyone has been using it without issues.
My personal nitpick (and it's because I'm very lazy) is that blurb is a command-line tool, so I have to be at my computer to use it. A lot of the time, I found myself reviewing a pull request on my phone, and all it needed was a news entry. But since I was on my phone, on transit, at the beach, or wherever, I could not complete the process and had to wait until I was back at my computer.
Before starting to write this project, I needed to come up with a project name, and of course that was the hardest part of the process. I had thought of unimaginative names like "Blurb on the web" or "blurb_ee" (ee: extended edition, pronounced blurbee). Gregory P. Smith suggested "Blurb 2.0". It sounds okay, except that I'm not actually implementing a new "blurb"; I'm providing an enhancement of the "blurb add" command, so it covers only a subset of the full "blurb". "Blurb 2.0" sounds incorrect to me. Larry himself suggested "webLurb" (pronounced webblurb, or maybe we blurb?). In the end, I went with "blurb_it", to which Guido responded: "ship_it". 🚢
There is still more work to be done for blurb_it:
- Write unit tests
- Add Travis CI integration to the repo
- Transfer app ownership to The PSF
- Make it a GitHub App instead of an OAuth App
- Add form validation that the news entry has to be more than 30 characters
- Ask for Andrew's feedback on best practices when using aiohttp-session
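The form-validation item in the list above is simple enough to sketch now. The 30-character threshold comes from the TODO item; the function name and error message are hypothetical placeholders for whatever the web form will actually use.

```python
# Minimum length for a news entry, per the TODO item above.
MIN_LENGTH = 30

def validate_news_entry(text):
    """Return an error message, or None if the entry is acceptable."""
    if len(text.strip()) <= MIN_LENGTH:
        return f"News entry must be more than {MIN_LENGTH} characters."
    return None
```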
To be continued
That's all the time I have to write up about the sprint. I'm heading back to Vancouver and will be away from open source for the rest of the month, but I plan to follow up with more details on some of the other projects.
Update: Part 2.
Thanks for reading.