OpenStack as Layers but also a Big Tent but also a bunch of Cats


I'd like to build on the ideas in Sean's Layers. I've been noodling on it for a while and have had several interesting conversations with people. Before I tell you how I've taxonomied things in my head, I want to spend a second on why.

Why do we care?

Our choices in organizing our work affect a few different unrelated things:

  • Who we are and what we all work on
  • What we release that we kinda expect people to ship
  • What an end user can count on
  • What an end user might find that would make life better for them

Amazingly, we've been trying to define all of those with one word. I'm pretty sure that's bananas.

Who cares?

Just as there are different reasons we care about this, there are different people who care about different aspects of the answer.

  • OpenStack Developers - us, the people who actually hack on OpenStack itself
  • Distributors - people who redistribute packaged versions of OpenStack software
  • Deployers - people who deploy and run OpenStack Clouds
  • End users - people who use deployed OpenStack Clouds

Even though I put "end users" last, I actually care about them a lot - but I'm going to speak to developers' concerns first, since that's who we all are.

Who we are and what we all work on

(This section is very OpenStack-developer centric. I cannot imagine that an end user or really even a deployer or distributor cares at all.)

Amongst other things, the OpenStack project is a community of people working on the common goal of "Open Source Cloud". If you're one of us, you should be able to vote in our elections and you should absolutely be granted entry for free into our design summits or any other meetings we have. I say "if you're one of us" because the interaction, collaboration and participation aspects are important to us. It's a sliding scale, so you can definitely be more in or less in. Some examples of community that we care about: being on stackforge rather than github; having a PTL who you elect rather than a BDFL; having meetings on IRC. "Do any of the people who hack on the project also hack on any other existing OpenStack projects, or are the people completely unconnected?" is a potential social touchstone as well.

All in all, meeting the requirements for "being one of us" is not particularly hard, nor should it be. We're not aiming to keep people out, we're a big friendly tent, but there also exist in the world people of ill repute who are trying to sell snake oil and fool's gold. While we want to grow our culture, we also need some sort of way to define what that is and keep detractors out.

This is OpenStack as a big tent - we want you here - we just want to check at the door that you're being honest, as it were.

What we release that we kinda expect people to ship

(Distributors obviously care about this, as do deployers. End users, again, could not care less.)

Just because you're one of us doesn't mean that everything you do is something that might be included in a collection of software called "OpenStack". I don't think it will be controversial to say that, CLEARLY, any work hacking on zuul is work that is part of our community, and also that CLEARLY zuul is not an element of the collection of software called "OpenStack." I pick an obvious and non-controversial extreme example to show that I'm not splitting hairs.

Now, "what is OpenStack" is a much large question as trademarks and defcore get involved, and is not at all what I'm attempting to even think about - but from our point of view as the people who release things, we have a clear stake in putting our name on the software we release that we think has some quality of OpenStack-ness.

Inclusion in this can actually have both positive and negative responses from folks that aren't us. "Oh cool! OpenStack has DNS now!" and "Oh crap! I had a thing that did DNS and now OpenStack has one!" are both valid emotions. Nothing I can say here will prevent or invalidate some people having each of those emotions for each new thing we add to OpenStack.

There is another aspect, which Tim Bell brings up frequently: quality. Our current system tells you only that a project has been integrated with our development and release cycle. It does NOT tell you in any way if that project is any good.

What an end user can count on

(This is the first time we find a subject an end user cares about.)

There is a set of things which OpenStack just has to have, or else calling that thing "OpenStack" is just being silly. Branding and trademark and product discussions aside, there is, deep down, some set of things without which the cloud in question is absurdly useless.

This is also important because it shapes how much logic and discovery an end user has to have in order to accomplish their task.

What an end user might find that would make life better for them

As an end user, I can tell you that, crazy as it might sound, I have the ability to look at things and make some decisions. I know, as a user, if there is a feature in my cloud, and I can choose to use it or not. If I'm an advanced user like Infra spanning multiple clouds, I might care immensely if a feature is ubiquitous. But for other users, using an advanced thing might be awesome and ubiquity is not interesting.

Base Level Touchstone

"Can an end user actually get a working VM without function X?"

Let's go back to the layers. I think the idea is great, and I'm going to quibble mildly with one or two of the assumptions and then propose a slightly different model.

I think Sean is right, for better or for worse, that starting with basic compute as a building block is spot on. I disagree though about stateless compute - but I think that's because I'm looking at this question from an end user POV, not a deployer POV. I'm sure that many deployers start very small with a stateless compute cloud. But as an end user, "basic cloud == minimal stateless compute" falls down for 2 reasons:

  • To run a thing on a stateless compute cloud, you actually MUST have the more advanced services. This is kinda the whole point behind Cloud Foundry, right? Stateless compute, PLUS, some way of putting your databases and object storage in the cloud so that they're not on the compute instance itself.
  • The first step in the "Cloud Journey" is not "Cloud Native" - it's running traditional applications in the cloud. Infra started out this way, even though now we've got some crazy completely elastic driven-by-load-demand cloud applications - for the first year of OpenStack, everything ran on one single Jenkins server on one cloud instance and kept its data in local storage.

I'd like to keep Layer #1 as "Base Compute Infrastructure" - but I'd like to suggest that we define it with two touchstones in mind:

  • What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)
  • What does Nova need to count on existing so that it can provide that?

(Here is where we get to the first real concrete suggestion.)

For those reasons, I think the set is:

  • Nova
  • Glance
  • Keystone
  • Cinder
  • Designate
  • and we want Neutron to be here
  • and all the common libraries (Oslo and otherwise) that are necessary for these

For each of those things, Nova does not or will not have an internal replacement for the functionality. Nova REQUIRES that Glance exist to get images to deploy. Nova REQUIRES that Cinder exists to provide persistent disk volumes. Nova REQUIRES that Keystone exist for auth. And Nova REQUIRES that oslo.config exist to read its config file.

We gate these things together. If they don't work together, we should all go home. We also gate with tip of master vs. tip of master because we release these at the same time. Nova also REQUIRES a database and a message queue - but we don't need to gate on their master branches because we expect to consume releases of that external software. But Nova cannot consume releases of Cinder, because we also make Cinder, and we're going to release it at the same time.
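
Another way to see that distinction, as a sketch (the repos and packages named here are just the obvious ones): things we co-release get checked out at tip of master, while external software we merely consume gets installed as a release.

  # things we release together: gate against each other's master
  git clone https://git.openstack.org/openstack/nova
  git clone https://git.openstack.org/openstack/cinder
  git clone https://git.openstack.org/openstack/keystone
  # external software we only consume: install a released version
  apt-get install mysql-server rabbitmq-server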

These are also services that a user cannot reasonably provide for themselves from within a cloud running a more pared-down version of this list. A user can't provide their own reverse DNS because they don't own the IP space. A user can't provide their own network or storage, well, for all of the reasons.

And to build them, we rely on a set of shared libraries which grew over time from a bunch of developers working on a bunch of projects together. We release these separately so that all our projects (not just the ones in Layer #1) can depend upon the same versions of things. This makes distributors and deployers happy, even if occasionally it makes life harder for developers.

Ubiquitous Wordpress Example

Every config management or orchestration example starts with a Wordpress example, so let's walk through that as a proof point of the above (assuming I don't want to rewrite wordpress to be an OpenStack-Native application).

Let's say I want to start a new blog, http://blog.inaugust.com. In general terms, I need a computer to run it on, an operating system on that computer, some disk space that doesn't go away so that my beautiful blog entries don't disappear, an IP address so that you can connect to my blog, and a DNS entry so that you don't have to type in my IP address to see my blog. Finally, I need to be able to connect to my cloud and tell it I need those things.

What does that look like?

  nova boot --flavor='2G' --image='Gentoo'       # gets auth token and Nova url from Keystone;
                                                 # Nova talks to Glance to get the image
  cinder give-me-a-10G-volume
  nova attach-that-volume-to-my-computer         # nova talks to cinder
  neutron give-me-an-ip
  nova attach-that-floating-ip-to-my-computer    # nova talks to neutron
  designate call-that-ip 'blog.inaugust.com' --also-reverse-dns-kthxbai    # designate talks to neutron
  # Log in to the host and actually install wordpress
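
For the terminally curious, here is roughly how those steps map onto the real clients of the day. Treat it as a sketch: the flavor, image, network and zone identifiers are made up, and the designate invocation in particular is from memory.

  nova boot --flavor 2 --image gentoo blog                 # auth token and endpoints come from Keystone
  cinder create --display-name blog-data 10                # a 10G volume
  nova volume-attach blog <volume-uuid>
  neutron floatingip-create ext-net
  nova floating-ip-associate blog 203.0.113.10
  designate record-create <zone-uuid> --name blog.inaugust.com. --type A --data 203.0.113.10
  # then log in and install wordpress as before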

If any of those services are missing, I can't make a wordpress blog. If they are all there, I can. If you use juju or cloudfoundry or ansible to make a Wordpress blog, they are the ones responsible for doing all the steps for you. They may do them slightly fancier.

Also, please someone notice that the above is too many steps and should be:

  openstack boot gentoo on-a 2G-VM with-a publicIP with-a 10G-volume call-it blog.inaugust.com

Concrete Suggestion #1

Shrink the integrated gate and release to this and call it "Layer #1"

We need to test these things together because they make assumptions that the other pieces exist. Also, the set of things in Layer #1 should never change -- unless we refactor something already in Layer #1 into a new project. (You may notice that all of these projects were originally part of Nova.)

Once Designate is in, that's quite literally all of the things you need for a functioning VM.

We can call Layer #1 something else, but all the good names are overloaded and will cause us to have two years of arguments quibbling over the implications of wording choices.

Concrete Suggestion #2

Make Layer #1 assume the rest of Layer #1 will always be there

Inside of these pieces of software, assumptions should be able to be made about the co-existence of the other pieces of software. Example: nova shouldn't need a config file option pointing at where glance is; it should just be able to ask the keystone service catalog.
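
As a sketch of the difference (the option and command names here are approximate, from memory of the configs and clients of the era):

  # today, nova.conf carries something roughly like:
  #   [glance]
  #   api_servers = http://glance.example.com:9292
  # with this suggestion, nova would instead look "image" up in the keystone
  # service catalog, the same way an end user's client already can:
  keystone catalog --service image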

Next layers up

There are no next layers up

I think Layer #1 should be the only layer, because after that the metaphor of layers falls apart. Instead, let's talk about some groupings of stuff, as they relate to different end user categories.

There are "Cloud Native" applications, there are "User Interface" applications, and there are "Operator" applications. All of these fit into the tent, and some of them may have dependencies on others -- but they ALL depend on Layer #1 being there and being stable.

Also, all of this falls under the umbrella of "work done on the OpenStack project."

It is entirely possible that in the future someone might describe a user story that involves some combination of Layer #1 and some of the Cloud Native applications as being something we need to have a name for. Today we have this, we know that we need it, and there are almost no people who would argue that clouds don't need to provide the functionality.

Features for Cloud-Native Applications

OpenStack already has a bunch of services that provide features for end user applications which could be provided by services run within VMs instead. These are the services that get controversial because some people feel they cross various boundaries.

Whereas Layer #1 provides services that can be quite happily used and consumed by a traditional IT Ops person (how Infra used cloud for at least the first two years), there are additional services that many people believe are essential in a cloud for people to write "Cloud" apps, but that other people vehemently believe step over a line and do not belong in a cloud. Let's look at two real world examples from Infra about the "Cloud Journey" and how our apps transformed from more traditional IT to more "Cloud Native".

Transformation #1 - The Adoption of Trove

Up until earlier this year, all of the services that Infra runs that rely on databases just ran a database locally on the compute instance they ran on. This, it turns out, worked fine - and we never had an outage of any sort due to any related cloud issues. We did take backups and do the normal things that normal Ops folks do.

Earlier this year, we migrated most of them to consume databases provided to us by a trove installation that was available from one of our cloud providers. The upside for us is that now we don't have to manage them, and our puppet code got simpler: it no longer has to describe installing databases, only that the service we were puppeting needs a database location and user as parameters. In short, we offloaded the care and feeding of an essential part of our infrastructure to our cloud provider - yet we did not have to change anything in our applications themselves. For those of you who didn't know it, the OpenStack gerrit has been running on top of Trove for at least 6 months now. Congratulations.

Transformation #2 - Logs in Swift

You may have noticed that we store log files, and that we store A LOT of them. Those log files are stored in a normal filesystem that sits on top of a cinder volume (ok, sits on top of a BUNCH of cinder volumes that we stuck into LVM). We've actually reached the limit of the number of volumes one can attach to a nova instance in that cloud, and each of the volumes is as large as they let us make a single volume - so we're maxed out. We could install ceph or gluster or something - (AFS, if you know us well) - but rather than keep copying them from the build hosts to a central log server, which is a very traditional IT way of dealing with it, we're working on copying them to swift instead. This is much more "Cloud Native" - right? The build hosts are already ephemeral, but they produce some "important" data, and we should put that data into object storage. This does, it turns out, require changes to our application code ... and that's fine. It's still a worthwhile thing, and it's a worthwhile feature for our cloud to offer us.

Both of these features fall into the realm of "our cloud has it, so we decided to use it" - and in neither case would we have been screwed by our cloud not having it. They're in the "reasonable people can reasonably disagree" place.

Concrete Suggestion #3

All projects should gate on their own functional test suite on top of devstack

Swift has a functional test suite that runs against a devstack install and only tests and gates swift. We should adopt this model for every Cloud-Native project. Since our projects talk to each other over REST APIs, if one breaks another in an asymmetric gate, it means we're breaking something fairly fundamental that should have had a tempest test to test the API contract. If a nova change breaks horizon, there is absolutely no reason that it should take horizon to find that, since horizon is using the same REST API that a user would be. If and when it happens, someone needs to add a tempest test to cover the contract. This may hurt for a while, but ultimately should help us unwind the gate complexity and provide a more solid experience for our end users.
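
A rough sketch of what one of those per-project jobs does - the tox environment name is from memory, and the rest is just the shape of the thing:

  ./stack.sh            # bring up a devstack with Layer #1 plus the project under test
  cd /opt/stack/swift
  tox -e func           # run only swift's own functional tests, over the REST API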

We gain two things by this.

First of all, although it may suck for a while, doing this should get us a bunch of interface and contract tests, which should make the fundamental things more solid, which means we can build more strongly on top of them. If Layer #1 doesn't work solidly, there's not much reason for anyone to want the Cloud-Native projects built on top of it. (If Nova doesn't work, why would anyone care about Trove?)

Secondly, it lets us open up the tent to more things being in "OpenStack Cloud-Native project land" without it needing to be a land rush. Because we're not putting Cloud-Native into the Layer #1 gate, it doesn't increase the multiplicative burden on the gate, or on the developers of those projects trying to debug failures in not-those-projects.

Reasonable people disagree about what Cloud Native applications belong in OpenStack, so if we ship a bunch of things that "we" clearly made, but that are a bit up for debate in terms of whether or not people want them, then we can let the market decide which of them are things that people can't live without and which are things that are only kinda cool.

Concrete Suggestion #4

Add opportunistic two-project gates

Glance has a swift backend. If swift is a Cloud-Native project, then it's not going to be in the Layer #1 integrated gate. But it would be good for glance to test the interface contract with swift in a gate. There should be a glance functional test that runs on a devstack that's configured with the glance swift backend and that gates glance and not swift. Same thing as the others: if there is a problem, it means there is a test that swift does not have of itself, and that should get fixed.
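
In practice that job is just glance's normal functional run against a devstack whose glance is pointed at swift - in glance-api.conf terms, roughly this (section and option names shift a bit between releases):

  [glance_store]
  stores = file,http,swift
  default_store = swift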

Operations

It should be noted that Ironic and Ceilometer are both special cases because they are consuming 'internal' APIs. But actually, they are a different category of thing that we've started to see emerge: projects which aren't clearly user-facing and aren't cloud-native either, but which solve real problems that are typical for anyone running a cloud, and which ALSO need some degree of integration with the applications in Layer #1 to be useful to anyone.

Just like Glance can be configured with a Swift backend - and, because Swift is also an OpenStack project, we should therefore have a gate check that Glance actually works when configured that way - we should also have a gate check that Nova works when configured with an Ironic driver. The difference is that that is actually testing an internal API of Nova's - not a REST API.

I think we should think long and hard on this subject, because it's a good challenge, but I'd like to specifically not cover it here.

What we release

At this point, we can pull in just about anything we want to that we think is a good idea into the OpenStack bucket. The barrier to entry is "Are you working on OpenStack / Are you one of us?"

We still haven't addressed Quality, though.

Right now, because the integrated projects list is awkward, there is a rush to get in so that your project is allowed to add references to itself into the other projects in the integrated release. It also means we're not really providing a mechanism to try new ideas, other than fully accepting them into a tightly knit club. It means that, for process reasons, we need to graduate things before the market has had a chance to test them out. Finally, it means that people want their project to "be in" as a thing of value in and of itself, rather than what it really is, which is giving up autonomy and submitting various project decisions to the collective will.

Concrete Suggestion #5

Tag projects as "Production Ready" when they're good enough to recommend using for the CERN LHC

Rather than tagging things negatively like "beta" or "experimental", positively tag things as "Production Ready". If a project does not have a "Production Ready" tag, we're still releasing it, and we're still standing behind it process-wise - but it may or may not have seen a lot of real world action, so you may want to be more careful before deploying it for mission critical things. I think we should be stingy in granting this tag - today, I'd say we should start with Swift and Layer #1 and that's it. That doesn't mean I think the other projects are bad - just that they need some bake-in.

What does that look like? We release OpenStack. We also tag some things as Production Ready. That is an arbitrary determination made by an elected body. There are no exact criteria except for convincing the TC that your project is good, and making each member of the TC willing to say to other people that they think it's solid enough to tell Tim Bell it's ready to deploy. If that's not a strong bar to meet, I don't know what is. Seriously, we should just make our motto: "If it's good enough for the CERN LHC, it's good enough for your private cloud."

It should be noted that while there are no criteria to pass, if a project doesn't have tests or docs in the official places that were contributed by the project's doc writers, project's qa and project's infra folks, it clearly isn't production ready or cared about by enough people yet.

"You're an OpenStack thing" is a Process determination

"You're a Production Ready OpenStack thing" is a Quality determination.

Global Requirements

Red Hat, Ubuntu and SUSE are all active and valued members of our community. They show up. They participate. They're certainly one of us. They have releases. Those releases, because of the way their repos work, require that it is possible to install all of the software from a single set of dependent library versions. It's not possible for Ubuntu Utopic to have two versions of python-requests. Even though a large production deployment might have two completely different teams of people deploying nova and swift onto completely different fleets of hardware, possibly even on different base OSes, it is essential that everything we release be able to work inside of a pool of a single version of each thing.

Concrete Suggestion #6

EVERYTHING participates in a single install-only devstack gate

In addition to the Layer #1 gate, each project is going to be gating on its own devstack-based per-project functional tests. But we also have to verify that our single list of requirements actually results in something installable, so there should be a devstack install that is literally everything we've decided to release. Like, stack.sh --everything. We will not run tempest against this like we do in the integrated gate ... we'll only run stack.sh and then exit. What we'll be testing is that nothing in our tent causes any of the services to not be able to start. It has to be a shared gate across all of the projects, because we need to make sure that no patches to anything cause anything else to not be able to install (like through some unforeseen python library transitive dependency ordering issue - don't laugh, it's happened before). The likelihood that this gate ever breaks other people should be very low.
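
stack.sh --everything doesn't exist today, so read it as shorthand for a local.conf that enables every service and plugin we release - shaped something like this (the devstack service names are real, the particular list is hypothetical):

  [[local|localrc]]
  enable_service s-proxy s-object s-container s-account    # swift
  enable_service h-api h-eng                                # heat
  enable_plugin designate https://git.openstack.org/openstack/designate
  enable_plugin trove https://git.openstack.org/openstack/trove
  # ...and so on for everything in the tent; run stack.sh, then stop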

Now, does each service work? That's a thing to be tested in each individual project's functional tests in its own gate, and each of those should probably also have a test config that runs the functional tests against a devstack --everything cloud too.

User Interface

No matter which things are in or out of a cloud, there are a set of end-user tools we produce to make things better for the user:

  • Horizon
  • Heat
  • python-openstackclient
  • python-openstacksdk

At the end of the day, I think these should exist to serve the end user, quite divorced from whatever choices the deployer of a cloud makes. I think they should know how to optionally interact with almost anything that we've chosen to release - because it is more helpful to the end user if they are expansive in what they understand. Remember, an end user does not care about what we decided to release or not release in icehouse.

Concrete Suggestion #7

User interface tools should be aggressive in talking to anything we release.

The instant we confer any amount of legitimacy on a project, its patches should be welcome in the user interface tools, because doing so gives the end users more options. Horizon, for instance, should already have Designate panels in it, and Manila should be landing patches in python-openstackclient. If the MagnetoDB folks want to write a heat provider, we should figure out how to make that work.

Concrete Suggestion #8

All user tools which are based on Layer #1 APIs should be able to be used standalone.

Deployers make choices - end users do not always care, and since Horizon and Heat both consume APIs, there is no reason why an end user should not be able to choose to run them if they want. For instance, I should be able to spin up a Horizon and use it to interact with Rackspace, even though Rackspace does not choose to deploy Horizon. For another example, HP has an internal public cloud customer that has been using Heat in production at high volume for over a year. HP public cloud does not run Heat - the internal customer runs a Heat and points it at the cloud endpoint - and they feel as if their experience is great.
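
To make that concrete: pointing a standalone heat at somebody else's cloud is mostly a matter of handing it that cloud's keystone and credentials - something like the following (the endpoint and names are made up, and a standalone heat needs a bit of its own service config beyond this):

  export OS_AUTH_URL=https://identity.example.com/v2.0
  export OS_USERNAME=me
  export OS_PASSWORD=sekrit
  export OS_TENANT_NAME=my-project
  heat stack-create -f blog.yaml blog      # only ever talks to public REST APIs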

This goes past just Horizon and Heat though. If a Layer #1 cloud is a basic thing that I can and should be able to count on, then I, as a user, may want to install a standalone trove or sahara that can use my cloud account to do its thing.

Concrete Suggestion #9

All user interface tools should be multi-cloud aware

Even if HP and Rackspace both deployed heat, Infra still wouldn't be able to use it for nodepool, even though the use case seems like it would be met, because our single pool spans clouds. If the tools can be used standalone, they also need to be able to be configured to point at more than one cloud. For instance, I'd love to be able to run a horizon myself and point it at both of my HP cloud accounts AND my Rackspace accounts and have it show me all of my stuff. (I'd really love it if the HP Cloud Horizon would let me register my Rackspace endpoint and credentials, but let's not get too greedy - that's trying to force a product choice on a vendor - I'm talking about end user choice here.)
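
That implies a shared, multi-cloud account config - something shaped like a clouds.yaml file in the os-client-config style (this is a sketch; the cloud names, endpoints and credentials are made up):

  clouds:
    hp:
      region_name: region-b.geo-1
      auth:
        auth_url: https://identity.hpcloud.example/v2.0
        username: me
        password: sekrit
        project_name: my-project
    rackspace:
      region_name: DFW
      auth:
        auth_url: https://identity.api.rackspacecloud.com/v2.0
        username: me
        password: also-sekrit
        project_name: '123456'

A horizon or nodepool that understood a file like that could show me "all of my stuff" across both accounts without per-tool configuration.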

Concrete Suggestion #10

End user tools should adopt a rolling release model

As an end user, I do not care about icehouse vs. havana. I want the current state of end user tools to work with the current state and previous states of existing server tools. NOW - heat and horizon can also be installed IN the cloud - so they should adopt the swift rolling release plus coordinated release model. That way we can release a horizon known to work with the version of nova when we release them together, but also a person running a horizon or heat standalone doesn't necessarily have to know - or to wait 6 months for updates.

The Design Summit

As may be clear at this point, the importance of doing things together and in the context of the project cannot be overstated. One of the most important things about our developer community is the design summits. We've already long ago outgrown the point where all of the people who need to be in any given room can be in the room, and we're having to learn new lessons about what needs to be summit material and what does not. On the one hand, rethinking how we do things away from Incubated and Integrated makes the task of allocating room space and timeslots much harder. On the other hand, if the summit is mostly about cross-project and project interaction issues, and less about purely internal project discussions, and if we're seeing growing benefit in using portions of the time for things like the Operator's summit and individual project meetups, then we should move forward with that worldview.

Concrete Suggestion #11

Design summits should be strongly focused on cross-project and user/operator interaction.

If we focus the bulk of the pre-scheduled summit time on hard cross-project issues as well as scheduled time to sit with Operators and End Users, then we could actually defer a good portion of time and space scheduling to be more real time in response to Operator/End User feedback. That way we can make more efficient use of the time we are together, and if there is important information or requests from either of our classes of users, we can start to deal with it right then, rather than waiting for six months until our process allows us to schedule it.

Conclusions

What does this all mean?

It means we get four new words/groupings:

  • Layer #1
  • Cloud-Native
  • User Interface
  • Operator

And a new tag actually related to quality:

  • Production Ready

We admit that there is a base-level of essential functionality that just HAS to work. This is Layer #1.

We have a nod somewhere that we care about users at all.

We can be more inclusive without also being more exclusive towards other thoughts. The Cloud-Native bucket will rightfully get quite expansive over time.

We no longer have the quality/readiness chicken and egg problem.

We can actually let the market decide more on the relative importance of things without us needing to pre-decide that.

We can take a much stronger position on some of the topics, such as "Compute instances need IP Addresses" without having to put ourselves in the position to take such a strong stance on everything else.

Who's with me?


AWSOME has the potential to be really awesome

Earlier today, Canonical announced AWSOME.

I'm honestly pretty stoked about it. At the last UDS, Mark Shuttleworth expressed his concern over the state of AWS compatibility and how important it was. As part of follow-up conversations, Vish, Gustavo and I talked about ways to address it, and the one everyone was most pleased with was the idea of a gateway service that could talk AWS on the one side and OS API on the other. This would allow the code paths inside of Nova to become simpler, and innovation at the API layer inside of OpenStack could proceed as architecture dictated. At the same time, as a separate project, developers on a gateway wouldn't have to get nova core devs to care about AWS APIs at all, and could write the service themselves to be as robust and full-featured as they want.

Decoupling FTW!

Anyway - I'm at conferences a lot and I have conversations with people a lot about ways in which they can help to solve the problems they are experiencing. It's not nearly so common that people step up to the plate and just fully own doing something about it - so I'm extra excited about AWSOME's existence for that reason alone.

At the moment it's listed on Launchpad as being AGPL (although this is not reflected in the source tree), which obviously would exclude it from being an official part of OpenStack, and probably from being deployed on any of the public clouds being stood up. However, if the details of getting it licensed Apache can get worked out, I would certainly personally support including it in OpenStack.

Thanks for the work Canonical! I'm excited to poke/learn more.


death by a thousand cuts

It's amazing to me what features drive decisions when choosing a technology. In my case, it's a clock applet, but let me set a little bit of a context first.

I stopped configuring my UI environment several years ago, opting instead to use the experience that had been designed for me by the fine folks at Ubuntu. This wasn't entirely just blind trust or pleasure - but rather that the defaults were sensible enough, and I wanted to be in the business of doing things, not spending an hour deciding what font I wanted my desktop to display. I believe I've been doing this since dapper, if not earlier.

Until now.

I tried. I mean, I've bitched at Jorge some in person, but I ran Unity starting with Natty up until last week. I ran it as provided, as intended, and I tried to learn to think about things in the way it was asking me to.

Unity is generally a decent piece of software. I don't hate it by any means and it is certainly workable. I can see how, if one wanted to design a single user interface that would work on laptops, tablets and phones, Unity might be what you'd end up with. There are weirdnesses, such as alt-tab having become counter-intuitive and seemingly non-deterministic. That the launcher buttons launch a program the first time you click them and switch to the program subsequently makes sense for every application I run - except for terminal windows. Of course, since Unity isn't designed with me in mind (even though I'm a stalwart and loyal Ubuntu user who evangelizes it to everyone I meet) it's to be expected that UI behavior around having 20 different terminal windows open might fall through the cracks.

None of the things I didn't like about Unity were monumental though, and I learned to deal with them in the spirit of being a good sport and knowing that sometimes initial distaste is really just distaste for change itself.

On a lark last week, while reading about Mint, I decided to give the Mint Gnome Shell Extensions a try - which meant installing Gnome 3 - so I gave that a try too. Same thing, really - gnome-shell, MGSE - they're both fine. They're both weird in their own way, and I'm sure I could get used to both of them if I cared to spend the effort - but they aren't any better than Unity, nor is Unity any better than either of them. They're all just new and weird and will take getting used to for a person who has used and loved an X11-based desktop as his primary interface since 1999. I've got habits. I expect them to work. On urging from a friend, and since I was already trying out alternatives, I gave XFCE a shot. It's also fine. It behaves more like how I'd expect things to behave than the others do, that's for darned sure.

So I had some alternatives, and they either fixed some things I mildly cared about, or they didn't and just chose their own unique ways to be weird... essentially a wash.

Except for the one thing.

The thing that, it turns out, has become the one must-have feature for me. The thing that I had before and now have lost. And the one thing that I tried in each of the environments to find a good solution for.

And failed. 

And that's the Gnome2 Clock Applet.

I have, on more than one occasion, lorded it over my friends who are silly enough to run something that isn't Linux about how bad-ass my clock applet is. They have nothing like it. It's a feat of UI brilliance. It works like a normal user expects a clock to work, and then it has additional features that are perfectly discoverable without having to read documentation. So it's got all of the power that a power user might want, and yet has sensible defaults and behavior if you just want it to be a clock.

Let me tell you some of what it does for me:

 

  • It is a clock.
  • In a very succinct way, it also shows me temperature and weather.
  • When I click it, it shows me a calendar, and an expandable list of locations.
  • It lets me add a set of locations
  • It shows me those locations as dots on a world map.
  • That world map has a daylight/nighttime line drawn across it.
  • It shows me the times of all of the locations, as well as the weather indicator.
  • It lets me change my location by clicking a button. 

 

NOW - the pure UI designers out there will scream - why does your clock show you weather information? That's unrelated to the time!!!

See - that's what's brilliant about the applet. It seems to understand that it's not actually a clock - even though that's the first element of it that you see. It's a location information and management applet. For a clock to truly and properly work these days, it kind of has to know where you are in the world (a feature of all clock systems on all computers at this point). Once it knows that - well then - why not be a gui interface to both managing that location and providing information that is dependent on that location? For someone who travels as much as I do (none of my family ever know what city I'm in) it's a godsend. With one click, I tell my laptop where I am, and it keeps a summary of essential information about that location in a useful location quickly within my eyesight. It recognizes a use case - a real use case. It recognizes that my location may not be a fixed quantity, and that I might want to deal with that in a seamless and sensible manner. It also recognizes that, in addition to just being able to change locations - I might work regularly with people all over the globe, and sometimes it's really handy to be able to simply and easily see whether it's appropriate to assume that they are awake or not. On top of all of that - if YOU happen to not need any of that, you don't have to know that any of that is there.

Unity has a clock applet that lets me switch locations - but no weather. I have to add the weather indicator for that, and then I have to maintain two location lists and update them in two places.

Gnome3/MGSE is even worse - the weather indicator that it has doesn't seem to support location lists - only your current location - and you are required to enter that location using cryptic weather station id codes.

XFCE's clock doesn't even support showing me the current date.

So I've decided something. For now, I will run Gnome Classic (aka Gnome 2) and I will continue to enjoy my user interface experience, complete with consistent operation of all of the buttons on my computer and an amazing location application that is unmatched across any operating system. Gnome 2 was the last thing that both worked well and was designed with me as a target user. When such a time comes that I am, for whatever reason, prevented from running it, I suppose I can sit down and port the features I need to whatever new environment I have to run - but it would be really outstanding if instead the people running the project that I'm ostensibly a member of started caring about me again.


Two subjects are one too many for a blog post

It's my turn to apologize. Andrew and I apparently really angered people by being upset about something last week, and for that, as he already has, I apologize. I don't like making people angry or upset.

I believe Henrik made an excellent point, which is that for various different reasons, there are those of us who were upset when Oracle bought MySQL and yet felt compelled to not communicate this publicly. To be honest, emotions related to a business transaction ARE a little weird, so I'm not sure it's completely odd that people don't know how to appropriately express them. But as Henrik rightly pointed out, the Oracle takeover has been the elephant in the room (sorry Postgres - it's not you) and we've all been spending a good amount of energy NOT talking about it, because talking about it only leads to people getting upset. As I said before, I don't like making people upset, so I'll try to keep my comments there to myself for the most part.

I'd also like to apologize for writing a blog post with too many thoughts. I only included the discussion of the naming as what I thought was a humorous take on the backstory of why I was writing in the first place; I see the folly of my ways there. In the future, if what I want to talk about is annoyance at people eye-rolling at my passion for Open Source, I will endeavor to only talk about that. That way, with a single topic post, when it's referenced other places, there will be no confusion.

To sum up, I am sorry for causing any confusion or any anger or for making anyone upset.


Oracle do not, in fact, comprise the total set of MySQL Experts

There's been quite the thread on Google+ (my how technology changes quickly...) over a comment Andrew Hutchings made on an Oracle MySQL Blog Announcement for their new "Meet The MySQL Experts" Podcast. I should have ignored it - because I honestly could not give two shits one way or the other about Oracle or any podcasts that they may or may not decide to broadcast. But to be straightforward about it ... the title of the podcast is ludicrous. In case you were wondering, "The" in English is the definite article and implies a singular quality to the thing that it describes... effectively implying that Oracle's MySQL Experts are, in fact, the only MySQL Experts. We all know that's false - Percona and SkySQL are both full of experts as well, and likely have more MySQL Experts per-capita than Oracle does, as if a per-capita measure were important. Of course, as Matt Montgomery pointed out, there is absolutely no reason for Oracle to point people towards someone else's experts ... and that's fine. It's just that there are other ways to phrase the title that still assert Oracle's product and trademark and which are not, from a purely grammatical sense, lies. "Meet Our MySQL Experts" or even "Meet MySQL Experts" or "MySQL Experts Talk to You" or "Hey! Look! MySQL Experts are going to drink Black Vodka!" (ok, probably not the last, since that would point people to MariaDB - but it is at least a true statement... MySQL Experts WILL, inevitably, drink Black Vodka)

As I said earlier though - I don't really care about Oracle... they have no impact or meaning in my life... so if they want to either play silly grammatical games OR be unaware as to the actual meaning of words in English - that's fine. But then Matt Lord said something that really pissed me off:

 Any religion and its dogma can be problematic in the real world, whether or not it involves any kind of deism or not. :)

Too often people confuse FOSS with the cathedral and the bazaar, shared development, shared ownership and other high minded ideals and frameworks. In the end, it's a trademarked and in-house developed product that is released as FOSS. It's not a cross, don't try to impale yourself on it. :D

It's not that big of a deal people! We're surrounded by beauty and tragedy, this is just work.

Now, first of all, I like Matt Lord. And with that in mind, I have the following to say:

I am fully in support of trademarks and trademark protection. I am fully in support of people making a living doing what they do - especially if they are doing it by providing a service. I recognize that Oracle owns the trademark MySQL and can do with it as they see fit. Oracle does, in fact, own the product called MySQL, with all of the rights that go along with that... and honestly I do not think they are being bad shepherds of that product. Whether I like Oracle or not, it is undeniable that they are now a part of the MySQL picture, and I say good for them.

The reason I get pissed off is the attitude that it's not that big of a deal. The MySQL trademark and the business around MySQL is a BIG DEAL to Oracle, and if I were to try to put forward the opinion that they should just, you know, stop caring about it, people would think I was crazy. Why is it so unreasonable then for me to care about the portion of this that I happen to care about? Why is it not ok for me to NOT be in this for the money, for me to NOT be in this just as work?

I think it might be worthwhile reading The Cathedral and The Bazaar again - because it describes the two different models you are talking about rather than being a single entity that one might confuse FOSS with. The Cathedral, as described in the book, is the model traditionally taken by the MIT and Gnu-derived projects (although emacs has a more open dev model now) and is currently also employed by Oracle on MySQL. In fact, it has been the MySQL model for quite some time - well before Oracle entered the picture. It involves a mostly closed dev process from which code drops are made unannounced and at the whim of the folks in the Cathedral. It's not de-facto a bad thing, it's just a description of a process. With the Cathedral, ironically enough, it is the ideals of Free Software (that the software itself be free) that are more important, and an open development process that is less important. The Bazaar, on the other hand, is the process Linux uses - where all of the development is done in a distributed manner and in the open. The assertion in the book, and one of the philosophical differences between Free Software and Open Source (which makes the use of FLOSS or FOSS completely ludicrous) is that having an open development process is more valuable than just the software being free, although the by-product of an open development process is that your software sort of has to be Open Source. The irony here that I mentioned earlier is that, of course, Oracle approaching its Free Software offerings via the Cathedral model gives it none of the benefits you would think a corporation might want from an arrangement such as Eric Raymond's Open Source Bazaar model, and instead Oracle chooses to operate under a set of zealous ideals much more akin to Richard Stallman's.

I'm sure that analogy is not pleasing to either Stallman or Ellison.

Although I understand that the ideals behind Free Software may not be important to you, I do not think there is any constructive reason, in the context of a discussion about Oracle's business practice of asserting trademark ownership, to imply that my subscribing to those ideals is silly. It would be very difficult to accurately describe the success of any of the currently valuable pieces of Free Software as not due in large part to those of us who routinely impale ourselves on the cross of Free Software. MySQL AB's business strategy itself, which involved attaching FUD to discussions of the GPL to incite people to buy licenses that they quite simply did not need ... (a perfectly valid if devious business strategy) was predicated on the existence of such an enormous shit-ton of users that they could focus on converting a percent of a percent of those users into customers and still wind up selling the business for a billion dollars. That shit-ton of users grew out of the emergence of LAMP as the dominant pattern for the Web. LAMP arose because it was technically much better than any of the alternatives... and the pieces of LAMP became dominant because of the work of a set of people who do, in fact, care about the ideals of either Free Software or Open Source.

You seem to be quick to put things in business perspectives and to remind people that it's ok for Oracle to do business. I agree. It's ok. But we wouldn't have had MySQL to work with in the first place if it wasn't for a bunch of people for whom it was not just a job, for whom it was not just work, and for whom the ideals you are looking down on are not silly things.

So disagree with me all you want to about the effects of Oracle's choices on the health of MySQL. Defend Oracle all you want to on whatever terms you want, in whatever way you want to define a set of values such that they are positive. I'm right there with you on some of it, I might disagree with you on other bits, and that's just life and how we go on being people ... but please do not smirk and snicker and roll your eyes and tell me that the things that I think are important are not. I assure you, I find them to be very important and I do not believe I am the only person who does.
