Distributed Systems and the End of the API (meta)

I just published a written distillation of my talk at PhillyETE 2013, Distributed Systems and the End of the API, over at the “writings” blog for the Quilt Project.

Take a look, especially if you are interested in distributed systems, CRDTs, the general suckage of how APIs work, or if you’re curious about what this Quilt thing is all about. (Spoiler / hint: the talk+post isn’t strictly about Quilt, but is very strongly related.) My inspiration for writing up the content of the talk comes largely from Michael Bernstein’s writeup of his RICON West 2013 talk, Distributed Systems Archaeology. Giving talks can be a compelling way to spread information and evangelize different (and hopefully better!) ways of thinking about problems, but depending on slide decks and video dumps is a poor way for people to discover and access that information. Giving it a proper home that is easily searchable and accessible to all (despite visual-, aural-, attention-, or time-related disadvantages) seems to be a total win, especially if one is hoping to have a lasting impact.

My apologies for the piece being so long (~7,250 words). I really didn’t want to disturb the single narrative, and so resisted splitting it up into parts. Hopefully the length does not detract from the message. It’s very clear that my biggest failing as a writer remains my verbosity, something I’ll be watching more closely than usual for the rest of the year. It would not hurt to work on more tightly-scoped, concise pieces, if only to be able to put some bounds on the time spent on the writing itself.


Theorizing the Web, an experience

Last week, I attended Theorizing the Web (TtW). I can say without hesitation that it was one of the most challenging, enlightening, and useful conference experiences I’ve ever had.  I’d like to provide a summary account of my experience, and maybe offer some (early, I’m still processing) personal takeaways that might be relevant to you, especially if you are involved professionally in building the software and technology that is part of what is theorized at TtW.

The first thing you need to know is that TtW is not a technology conference. Before I characterize it positively though, it’s worth considering the conference’s own statement:

Theorizing the Web is an inter- and non-disciplinary annual conference that brings together scholars, journalists, artists, activists, and commentators to ask big questions about the interrelationships between the Web and society.

While there were a few technologists in attendance, even fewer were presenting.  As it said on the tin, TtW was fundamentally about the social, media, art, legal, and political aspects and impacts of the internet and related technologies.

Before I enumerate some of my highlights of TtW, I want to offer some context of my own, a thread that mostly winds around:

Why did I go?

As a guy that creates software professionally, I have historically been fundamentally unconcerned with how what I build interacts with and is informed by the non-technical, those obfuscated social and political and higher-order economic forces and undercurrents that define how we interact and relate to each other.  In the typical engineer (and business owner) role, I have generally sought to optimize strictly for utility.  It’s not that what I’ve built has necessarily been to the social, political, or economic detriment of others; it’s that I’ve very rarely thought of how what I build defines and is defined by those forces.

In hindsight, that I developed this blind spot is surprising to me.  I received a fundamentally non-technical education at a very liberal liberal arts college in New England, and so was aware of these topics to some degree in an academic setting and very personally, having come from a working-class family of little means compared to my peers. (I vividly recall being gobsmacked by the casual just-so way in which some other students talked about the Benzes and Beamers their parents sent them off to school with.)  Further, I have always been politically active and consider myself to be socially and culturally aware (though, don’t we all?).

Nevertheless, that academic background clearly did not insulate me from the inuring socialization of an engineering culture. Part of the process of being “professionalized” within the software business and engineering practice was being trained in the often explicit denial of the political and social nature of our work. The manifestations of this are seen every day online and in in-person dialogue, including adopting a live-and-let-live stance with regard to blatantly exploitative internet businesses and technology; minimizing the impact and importance of non-technical collaborators whom we rely upon or non-technical work we do ourselves in order to succeed; and dismissing by default any technical decision that is not made on strictly technical grounds.

To summarize, I’ve recently had two (related) realizations:

  • I find myself working within a community whose collective discourse is one where technical optimality is paramount and prioritized, even when non-technical concerns are known.
  • I personally have historically not given due consideration to how what I build is affected by its (and my) social, political, and economic context, or how what I build in turn affects others.

These things dawned on me over an extended period of time as I contemplated the full scope of Quilt (my current project).  While it is a software project with a technical footprint — it will manifest as code and libraries and tools and services — my objectives are fundamentally non-technical, and rooted in non-technical impacts of existing software and technology.  Considering that prior to this realization, I thought Quilt was a purely technical response to a set of purely technical problems, this was a cognitive break on the order of nothing else I’d experienced before.  The obvious conclusion was that, to be successful, I must purposefully locate Quilt with respect to those powerful undercurrents that dictate the human condition. What I wanted to do was defined in large part by exigent sociological, economic, political, and legal forces, and the product of that work would hopefully have an impact on them in turn.

Once internalized, it was clear that my context and professional history guaranteed that I was not yet properly equipped to thoughtfully approach these questions.  So, just as I’ve been pushing myself to bone up on certain scholarly corners of technical domains, I started seeking out ways to become more aware of and attuned to the non-technical influences on and impacts of my work.

Attending Theorizing the Web was just the latest chapter in that effort: here was a gathering of mostly non-technical academics, educators, activists, and other commentators talking about how people are using, benefiting from, coping with, and being victimized by internet technologies and their applications. I wanted to gain their perspectives for exactly the reasons why I suspect many of my peers might discount them.

With that out of the way, on to some highlights and related thoughts.  There was much, much more to take in at TtW than I can or will make mention of here. Residuals of the streaming process are available for viewing now; it looks like each session is going to eventually be published on the TtW YouTube channel.

People are dying online

Though I was enthusiastic about TtW in principle, there’s always a degree of apprehension upon stepping into a new conference.  Thankfully, the very first session I attended put any unease I had to rest. “Small Data: Big Trends in the Little Ns” was a collection of four talks by investigators who decided that the best way to usefully study their subjects was by direct interview, observation, or analysis of data from very small cohorts.  The point here is not to gather statistically significant findings; rather, my read is that the methodologies discussed by the speakers were aimed at allowing for the development of a narrative around the particular social experiences being studied.

(I personally draw a connection between these sorts of inquiry and the use of inductive or intuitional reasoning about issues in mathematics or programming vs. the use of mathematical formalisms. Social narratives and inductive methods — at least in my experience — yield personal understanding, whereas statistical social inquiry and formal proofs often do not.)

Anyway: two of the presentations were particularly impactful for me, and both of them were centered on death and how expressions of grief manifest in online fora.  Molly Kalan focused on Facebook memorials (including pages of the deceased themselves, maintained and retained online as one might maintain a grave or shrine), while Timothy Recuber presented a curation of suicide “notes” left online.

The most intellectually fascinating part of Molly’s presentation for me was her discovery that there was an implicit social hierarchy that affected how people expressed their grief online.  That is, if someone “closer” to the deceased publishes a long, heartfelt eulogy, people that are less close (say, a work acquaintance vs. a close friend, or a cousin vs. the deceased’s mother) will try to calibrate their expressions of grief online to be shorter, less forceful, and essentially deferential to the closer party.  Even more interesting, when this hierarchy is violated (e.g. when a cousin publishes a more intense eulogy or memorial than a widow), both parties involved as well as third-party observers reported discomfort with the transgression.

Now, totally aside from the intellectual stimulation, the raw stuff of Molly’s presentation consisted of examples of memorials from Facebook as well as individual stories of grief.  This was sobering and emotional, but that simply set the stage for Timothy’s study of online suicide “notes”.

Of course, once people were able to write things online and have them viewed by others, it was inevitable that such capabilities would be used by some to leave suicide notes, rather than relying on the “classic” notepad, typewriter, and other printed methods.  But, I was unprepared for one of the centerpiece cases in Timothy’s study, that of Martin Manley’s suicide note.  Martin, an accomplished sportswriter and statistician, killed himself with a handgun on his 60th birthday. But, before doing so, he published his suicide “note”: a dedicated website containing dozens of pages of autobiographical detail, apologies and farewells to loved ones, extensive discourse on his rationale for suicide, and various social and political commentary.  Timothy’s presentation of Martin’s “note” took me totally by surprise; combined with the previous discussion about memorials, I had a very hard time keeping it together. At least I wasn’t the only one in the room that was so affected!

There’s a Mitch Albom quote on one page in Martin’s “note” that I think appropriately sums up the presentations on death and online memorializations:

Death ends a life, not a relationship.

While memorials have traditionally been fundamentally token reminders of loved ones lost, newer mediums make it possible for memorials to be ongoing, continuing relationships with the departed, in dynamic (and thus, sometimes uncanny and disturbing) ways. As such media are likely to become more and more sophisticated, it seems that the above quote will become less and less metaphorical as time passes.

I managed to snag Molly for a couple of minutes to ask her if the subjects in her interviews seemed aware of the potentially (inevitably) ephemeral nature of the memorials they were creating, given that Facebook is the steward of the data underlying those precious eulogies and memories. Her answer was that while some people did want to retain a personal copy of parts of loved ones’ memorials, basically none of her subjects appreciated the fact that those cherished artifacts were retained at Facebook’s discretion (and therefore, at the discretion of its investors, the market, etc.).  To them, Facebook provides permanence.

This is particularly ironic given that Martin Manley’s note — the publishing of which he arranged through one of Yahoo’s services and pre-paid for five years, in contrast to the free, advertising-supported model of Facebook — was taken down by Yahoo as violating their terms of service. I find this to be an unconscionable action coming from a company that, via its Tumblr unit, happily serves up hardcore porn, images and narratives of self-abuse, and virulent hate speech to all comers without restriction. Thankfully, Martin’s site’s content was captured by a number of different groups, and lives on at various mirrors, to which I have linked previously.

You don’t have to be “real” to have an impact

Later on the first day, I really enjoyed Molly Sauter’s deconstruction of the saga of Amina Arraf, the purported author of the Gay Girl in Damascus blog, which rose to some acclaim and awareness as civil unrest in Syria accelerated in early 2011.  This, despite the fact that Amina was fictional; the author of the persona and the blog was a male American student at the University of Edinburgh, who was found out only after writing on the blog (using a second fabricated persona) that Amina had been abducted by Syrian security personnel.  This prompted waves of online activism and a U.S. State Department investigation (because Amina was ostensibly a U.S. citizen living in Damascus).

Aside from the bizarre tale — and without going into the absurdities of straight white Western guys impersonating lesbian women living in the Middle East — Molly presented a fascinating characterization of the Amina persona as a digital “bridge figure”, a media presentation of “the other” (in this case, someone that is both gay and of Arab descent) that can establish a rapport with people in another context.  In particular, the claim is that Amina was perfectly positioned to appeal to Western, politically conscious liberals, which is borne out by the uproar among mainstream and social media calling for her release.

Of course, identity is a fluid and fragile thing, especially online, and hoaxes are not new. However, the story of Amina is notable as an example of “civic fiction” (a new term coined by Molly, apparently) in that, rather than actually being a bridge figure, she was really just a mirror.  Because she was fabricated by someone from the same context that she was created to appeal to, her story could only reflect the expectations, misgivings, and aspirations of her audience, i.e. those of an educated fundamentally Western woman struggling for democracy in a repressive regime.  This shared context is what ended up making the Amina story so appealing to Western journalists and media outlets, as her author was effectively trained by saturation in the media environment they constructed, and thus able to produce a narrative using the same tropes and patterns.

One of the last things Molly claimed was that even though Amina was a fabricated persona, she did “political work”.  I suspect that this term has particular semantics in political science or theory, but I interpreted it to mean that, despite the abhorrence of the impersonation (especially of someone in an “at-risk” or disenfranchised demographic), the fiction of Amina served its audience’s purpose in providing a bridge figure (despite her actually being a mirror), and was a genuine political actor (manifested in many ways, including by contributing to the narrative of the Syrian uprising and the coalescing of various activist communities when efforts to locate her after her arrest began).  The irony is that the power to do that work was granted by a media apparatus that blindly passed the Amina story along to the same audience from whence her creator came, instead of doing its purported job of mediating sources and identifying the fact (or, deception in this case) of the figure in question.

While reading further on this story, I came across a related, very pointed quote from Louise Carolin that I think is relevant in other social advocacy contexts:

[the first rule of being an ally is] don’t try to speak for the people you’re trying to support

The fungibility of identity was a recurring theme throughout the conference, and has been something I’ve bumped up against personally and professionally over and over. I still remember pining to play Alter Ego on my Commodore-64 (a game that offered the ability to simulate living another life, with different circumstances and decisions than your own), the impact that _why had (and still has) on friends of mine, and so on. I likewise remain fascinated by the Amina case study, how her persona was constructed, how the rest of the world largely accepted it until forced to do otherwise, and what all of this implies for identity on smaller stages and in different types of social spaces.

Stacks as States

On Saturday, Jay Springett gave a presentation that was framed with a question: why can Mark Zuckerberg call Barack Obama on the phone?  The punchline was that various “stacks” (i.e. sets of infrastructure that are housed within particular centrally-controlled, cross-national organizational silos, e.g. Facebook, Google, Twitter) are themselves sovereign, via exactly the same mechanisms as true nation-states.  While nations generally are defined by the lands and other natural resources they control, the network and computational sorts of “stacks” have equivalents in data, information architecture, computational infrastructure, citizenry in the form of dependent users, and effective international recognition on this basis as states themselves.  If you buy into this characterization, then it’s very clear why the executives of major centralized data and computation silos effectively have diplomatic relationships with heads of state: Obama doesn’t see Zuckerberg et al. as citizens of the United States, or even as the controlling parties of large corporations. They are peers.

Taken this way, revelations around the activities of the NSA and other Five Eyes agencies are even more plausible and “understandable” than they were before: these agencies also view these centralized silos as states (or, at the very least, as state-like actors), and thus the same signals intelligence activities that any spy agency might deploy against another nation are perfectly well applicable to any of the large “stacks”. That these silos happen to be incorporated and domiciled within U.S. borders may imply certain legal technicalities, but such things don’t change the fundamental ways in which these organizations are viewed and related to by “real” nations, and thus the calculus used by intelligence agency personnel when making operational decisions.

Jay provided a very compelling presentation as to the plausibility of this sort of characterization of the geopolitical nature of the centralized sources of information and computation that we rely upon today.  Additional resources he cites as good ways to further understand the political nature of infrastructure include Roads to Power and Seeing Like a State.

I was personally very glad to have stumbled across Jay’s presentation, as it provided a set of starting points — and most importantly, a related vocabulary — towards developing a deeper set of questions around the controlling structures behind these stacks of infrastructure, and what individuals and communities can do to assert their own agency with regard to their own infrastructure.  Jay’s own #stacktivism concept (an activism-focused term and set of resources for starting to understand the social implications of infrastructure) is one of these.

In closing

A number of thought-bombs were lobbed in the closing “keynote” panel on “Race and Social Media”.  One of the first came early from Lisa Nakamura:

While social content spaces are often intended as socially agnostic, they never are in practice.

I believe this is a specialization of the distinction I drew above between technical and non-technical concerns.  When we build communications software, tools, and platforms, many of the features we consider and implement aren’t simply “cool” and useful, equally accessible and positive for all of the parties that are affected by them: they often define the shape of conversations and interactions, in ways big and small, obvious and subtextual. Easy examples include things like the ability to publicly tag others in photos, and even the simple option to declare one’s relationship status (chosen from a predefined set of possible types of relationships).

The most powerful shared moment of the entire conference in my opinion (despite my tearing up the prior day) was at the end of the keynote, when Latoya Peterson gave a very compelling summation of the grounding roles of empathy and tolerance in a connected global culture:

It is a very very large world, and we need to exist in it together. What matters is that you love and trust people enough to want to be in the same space with them.  I don’t know shit about trans history or trans rights, but I love Mattie Bryce and I love Naomi Clark and I love all the fabulous trans game designers that I have met through being a gamer, and I want a world with them in it.  That is it, I don’t need to know anything else.  I need to understand what they need to be happy and what they need to be free, and then we work towards that together.

This got a thundering standing ovation from the audience, and rightly so.  Carrying that basic sentiment/intuition through each of our interactions and relationships with people with whom we have little shared context would surely make for a better world.

Fin

There’s so much more at TtW that fascinated, inspired, and touched me; I could go on and on, but the morning grows brighter, and work beckons.

So far, I have a couple of “takeaways” from TtW that are directly relevant to my own work and context.  These are not so much things that I learned directly, but perhaps some suspicions and half-thoughts that the conference either confirmed (which worries me re: bias, now looking for contrary indicators!) or helped me to articulate more fully:

  • Likely as a convenient shorthand, people often talk about the extremes of particular aspects of system design: centralized and decentralized; ad-supported and user-paid; point-to-point and aggregated; tolerant and intolerant; anonymity and authenticated identity.  I am more and more convinced that these are extremes within a multidimensional space that must be large and complex enough to encompass all of human endeavour.
  • A common thread through many presentations was an acknowledgement of the power that things like Twitter, Facebook, Tumblr, etc. have in terms of shaping how people work with information and construct or choose the cultures of which they are a part.  More generally, ideology and technology inform and define each other; to remix Lisa Nakamura’s assertion regarding social content spaces, technology is often intended to be ideologically agnostic, but never is in practice.
  • Despite this awareness, there was an implicit (and sometimes very explicit) fatalism at TtW regarding the centralization of that power, that the economics and politics behind the current generation of networked social services are so powerful that the models they’ve developed are not only the only ones that will ever be considered viable, but that perhaps we have passed a point where opting out of such models will be considered socially, economically, or even legally unacceptable.  As someone that is very interested in building things that enable the disaggregation of computing systems and communication platforms (again, see the Quilt Project), I found this fatalism somewhat disheartening (though perhaps not without merit, esp. given the case compellingly made by Jay Springett that I mentioned earlier). Ironically, it may be exactly that fatalism among those who have thought most deeply about the issues around the centralization of power and infrastructure that makes addressing their root causes more difficult.
  • While thinking about these sorts of things walking around Brooklyn in between sessions, I tweeted this: [tweet screenshot omitted]

I hope you enjoyed my write-up of Theorizing the Web, 2014. Maybe I’ll see some of you there, next year.


Buy a signed copy of ‘Clojure Programming’, help the EFF!

Update, 2014 Aug 29: Unfortunately, this fundraising effort did not pan out.  Only one individual reached out with a “pledge” for the EFF; rather than go through the rigmarole of collecting a single donation, I simply took him at his word that he donated the pledged amount, and set him up with a signed copy of Clojure Programming. Yes, this means I still have a pile of the books sitting idle.

Through a strange set of circumstances, I recently came into a decent-sized cache of copies of Clojure Programming (which I co-authored, in case the blatant self-promotion to the right didn’t tip you off already).  The only other likely option for them was to lie disused for some years, and then end up in the bin; so, I bought them (at a steep discount) from the prior owner, not quite knowing what I’d do with them.

If I didn’t live in the hinterlands, I’d just drop them off at the nearest gathering of Clojure enthusiasts; but, lugging boxes of books hours away to Boston or New York had little appeal.  My next thought was to simply sell them, but the economics are just crummy, given the PITA logistics of handling payments, boxing, and shipping for one book at a time, all for something like $15 or so net. (Yes, sorry, I’ll file this under #firstworldproblems.)

But, a better idea came eventually, one that I hope enough of you will cotton to to make an impact:

Buy a signed copy of Clojure Programming; all proceeds go to the Electronic Frontier Foundation

Surely you’re familiar with the EFF already; if not, you should be.

The “rules” here are very simple; basically, this is a first-price sealed-bid auction:

  1. Send me an email indicating how much you would like to donate to the EFF, $100 minimum, exclusive of the shipping necessary to get the book to you.
  2. The top 5 pledges received before March 5th, 2014 will receive a new copy of Clojure Programming, signed by me (FWIW, etc).  I’ll write whatever else you’d like in terms of a salutation, personal message, etc.
  3. The cumulative amount pledged will be donated to the EFF, in care of those that made those top 5 pledges.

If it’s not completely clear, I’ll not be keeping a single cent of the pledged amounts; everything will be going to the EFF.  In fact, I’d like to use Dwolla if possible for all donations, which would effectively eliminate transaction fees; I’m pretty sure the EFF can do more with that 3% than Visa, Mastercard, and Paypal.

Sound good?  Pledge away, and feel free to ask any questions here or via Twitter.


Results of the 2013 State of Clojure & ClojureScript survey

Two weeks ago, I opened up this year’s Clojure & ClojureScript survey.  Now, you get to see what everyone said, and we all get to speculate on what it means.

First, some process details:

  • The survey was open for ~7 days, from Nov. 5th – Nov. 12th.
  • I announced the survey here, on my personal Twitter feed, and via a total of four messages to the main Clojure & ClojureScript mailing lists.
  • 1,061 responses were received.

One immediately-surprising thing is that fewer people participated than last year; not by a staggering amount, but enough to make me pause.  Of course, there’s a ton of uninteresting reasons why that might be the case: the time of year the survey was offered, the new collection method being used (PollDaddy vs. a Google Docs form last year), the size of the survey (23 questions vs. 13 last year), and the peer presence of ClojureScript in the survey (which accounts for the increase in question count).  Of course, all of this comes with the caveat that I’m far from a professional pollster, so there are tons of factors that I’m surely unaware of that would impact participation (maybe some that I’m responsible / to blame for).

(Late note: people have mentioned that, in contrast to prior surveys, this year’s did not get any significant placement on Hacker News et al.  That may further explain the drop-off in responses.)

Per-question tl;dr

What follows is a super-brief summary of the results, with my own opinions interspersed.

Do you use Clojure, ClojureScript, or both?

The community is effectively split between those that use only Clojure, and those that use Clojure and ClojureScript.  Only 7% of respondents use only ClojureScript.

To me, this implies that ClojureScript has not (yet?) attracted a population looking to use it and it alone (as opposed to other languages that target JavaScript, e.g. CoffeeScript, which are used in conjunction with a diverse set of backend languages / architectures as appropriate).  A rosy speculation might be that those coming to ClojureScript from other languages are so enamoured of the language and its model that they also adopt Clojure, but it’s probably more realistic to say that most ClojureScript usage is simply a result of Clojure developers wanting to target JavaScript environments with a minimum of language and data model pain.

In which domains are you applying Clojure and/or ClojureScript?

In a few words, web development, open source, math and data analysis, and building commercial services and products using databases.  There aren’t a lot of surprises here.

Notable is that ~20% of respondents are using Datomic in some capacity.

The relative distribution of the domains really hasn’t changed since I started running the survey.  At this point, the only utility of this question is to limit one’s view of the rest of the results in terms of what people are working on.

Order these aspects of Clojure/ClojureScript to reflect how useful each have been to you and your projects…

It’s no surprise that functional programming, REPLs, immutability, “ease of development” (an ill-defined aspect to include in the list, I now realize), and host interop are hugely impactful; these are core to Clojure’s rationale.

On the other end of the spectrum, metadata and newer features like reducers and tagged reader literals are comparatively not getting a lot of love.  This is actually a bit unfair, especially with regard to metadata, but the stack-ranking nature of the question means that some useful bits were guaranteed to be lingering at the bottom.

What is your *primary* Clojure/ClojureScript development environment?

This question changed from prior years (people could choose only one development environment this time around), so comparisons with prior years would be inappropriate.

  • Emacs is preferred by over half of respondents, and nearly all of them are using cider/nrepl.el now; effectively no one is using SLIME or slimv anymore.
  • Likewise, vim users have largely moved over to vim-fireplace.
  • Light Table is the third most preferred development environment, just beating out Counterclockwise.
  • In a very short period of time, Cursive Clojure has overtaken the usage of the La Clojure IntelliJ plugin.

There are now a number of Clojure/ClojureScript implementations targeting runtimes aside from the JVM and JavaScript. To what degree are you aware of or using each of these implementations?

Very few people are using these alternative implementations, though up to 20% of respondents are evaluating at least one; the “winner” there is clojurec, which makes sense given some of the most popular complaints around Clojure, i.e. its general unsuitability for command-line utility development and the difficulty of using native libraries from within the JVM.

Re: Clojure

How would you characterize your use of Clojure today?

More than half of respondents are using Clojure at work, which is a big jump compared to the last two years, when only a third of respondents were so lucky.  This is nothing but good.

What version of the JRE/JDK do you target?

We are generally early adopters, and that carries over to JDK selection: 75% of respondents are using 1.7, 5% are on some pre-release of JDK 1.8.  JDK 5 is dead now, and JDK 6 is on its way out.

What tools do you use to compile/package/deploy/release your Clojure projects?

This is another question where I changed the survey to force people to make one selection, so comparisons across years don’t make sense.  Leiningen dominates the Clojure project management / build space.

Name *one* language feature you would like to see added to Clojure.

This was a free-form question, and I’m not going to attempt to summarize the responses. I’d encourage everyone to click through to the full survey results and look at some of these yourselves (maybe filtered to focus on the “types” of Clojure programmers / usage you are interested in).

Update: Alex Miller has summarized this question’s responses into a ranked set of categories of features.  This is a great way to get an overall impression of what’s top-of-mind among participants without weeding through the raw responses yourself.  Thanks, Alex!

In general, are the following statements true when applied to the Clojure libraries available within your domain(s)?

This question attempted to gauge general sentiment with regard to library quality on a couple of different criteria.  In short, Clojure libraries are easy to find, their maintainers are receptive to feedback and patches, they are technically of high quality, but they’re not always very well-documented.  None of that is surprising or particularly different from last year.

What has been most frustrating for you in your use of Clojure; or, what has kept you from using Clojure more than you do now?

Clojure is not generally suitable for building command-line tools, though people wish they could use it for that (perhaps ClojureScript and/or something like clojurec will help here over time).  As in prior years, people remain unsatisfied with documentation and development environments.

“Staffing concerns” is new as the #2 reported frustration, which I think is significant: I certainly know that companies using Clojure have a hard time filling openings lately.  Perhaps this is correlated with (maybe) increased commercial use of Clojure, given many more people using Clojure at work this year; it may also be a reflection of current/recent macroeconomics.

What do you think is Clojure’s most glaring weakness / blind spot / problem?

Again, you should go look at the raw results to see what people highlighted as problematic areas in Clojure.  I will call out this response though, as it was both at the top of the list of responses for this question, and made me chuckle:

“Where are the docs?”

“Read the (tests|docstrings).”

“Did you just tell me to go fuck myself?”

“I believe I did, Bob.”

(Apologies to @jrecursive.)

Re: ClojureScript

How long have you been using ClojureScript?

ClojureScript is very new, and in general, all ClojureScript programmers are new, too: effectively all of us have been at it for a year or less.

How would you characterize your use of ClojureScript today?

Despite ClojureScript’s youth, over 25% of its users are using it at work.  I find that amazing; it’s a testament to the relative maturity of the language, i.e. it started off benefiting from the lessons that were learned in building Clojure and bringing it to the point that it’s at today.  Of course, that doesn’t yet reflect in the maturity of ClojureScript’s implementation.  Things there are getting better fast, and will continue to do so.

Everyone else is basically just tinkering, which makes total sense right now.

Which JavaScript environments do you target?

Browsers dominate here, which makes sense.  However, people target a bunch of other environments with ClojureScript too, from node.js to app containers to plv8 to Riak’s mapreduce.  Basically, if it can run JavaScript, I’m guessing someone is going to target it with ClojureScript.

The tricky thing is what this implies for those of us targeting more than one environment with our Clojure[Script]ing: while 95% of a codebase may be perfectly portable among all the runtimes one can target from Clojure and ClojureScript, many of the options for dealing with that last 5% are suboptimal. In particular, adding an additional level of indirection to represent runtime differences conflicts with the typical objective of that 5%: performance-sensitive bits that need to touch runtime-specific / interop facilities.  Fixing / working around this has been a defcon 3 yak for me for some time, thus cljx…maybe it’ll help you out if you’re contending with the same issues.
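For flavor, here’s a minimal sketch of the sort of feature-annotated source cljx is meant to consume — the namespace and functions are hypothetical, and I’m assuming its #+clj / #+cljs annotation scheme (the actual Leiningen build wiring lives in cljx’s README):

    ;; util.cljx — a hypothetical namespace; cljx generates a .clj and a
    ;; .cljs variant of this file, keeping only the forms tagged for each target.
    (ns my.project.util)

    ;; The portable ~95% needs no annotation at all.
    (defn average [xs]
      (/ (reduce + xs) (count xs)))

    ;; The runtime-specific remainder gets tagged per platform.
    #+clj
    (defn now-ms []
      (System/currentTimeMillis))

    #+cljs
    (defn now-ms []
      (.getTime (js/Date.)))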

Which tools do you use to compile/package/deploy/release your ClojureScript projects?

Just like with Clojure, Leiningen (with lein-cljsbuild) is the overwhelming favourite for ClojureScript build/project management.  This is no surprise given the huge overlap between those using ClojureScript and those using Clojure.

Which ClojureScript REPL do you use most often?

The results here are very surprising, and sad to me: more than a quarter of ClojureScript developers don’t use a REPL at all.  Looking at the responses around what can be improved about ClojureScript, the ease of use of existing REPL options is easily the #1 reported problem, and is clearly preventing people from using what is reported more generally as one of the most useful parts of Clojure/ClojureScript.  This is absolutely something that is hurting ClojureScript adoption, especially insofar as one of the first things people reach for when tinkering with a new language is the REPL and ways to make it work with one’s other tools.

I’m doing what I can to help this problem by working on Austin, an alternative project- and browser-REPL implementation that aims to make ClojureScript REPL-ing as easy and as indispensable as it is in Clojure.  If you’ve had a hard time getting ClojureScript’s browser REPL going reliably, or would like to not depend on e.g. Rhino for project-connected REPLs, take a look; it’s far from done, but what’s there is easier to use and somewhat more flexible than the baseline.

Name *one* language feature you would like to see added to ClojureScript.

Just like with the Clojure analogue, this was a free-form question that I’m not going to comment on here. Click through to the full survey results and look at some of these yourselves.

What has been most frustrating for you in your use of ClojureScript; or, what has kept you from using ClojureScript more than you do now?

As mentioned before, difficulty using REPLs is the biggest deal here, followed by “difficulty debugging generated JavaScript” (something that I know has been a big focus of David Nolen & co. with source maps and such).  Other big problems include using JavaScript libraries (something I’m queued to improve), followed by the eternal frustration around documentation, and my personal favourite, “coping with JavaScript”.

That last one is not something we can do much about very quickly in the ClojureScript community, but it’s definitely one I share; having relied on the JVM and JDK for the last 15 years, I sometimes find it incredibly frustrating to have to work around the shortcomings of the JavaScript runtime environment, language problems, and holes and misfeatures in the standard library.  Hopefully we’ll be saved eventually by some potential improvements coming from the ECMAScript standardization/evolution process.  In the meantime, perhaps there are some clever things we can do to blunt the pain a bit.

What do you think is ClojureScript’s most glaring weakness / blind spot / problem?

Here is where you can leave any general comments or opinions you have…

The last two questions both collected free-form responses, only the first of which is ClojureScript-specific.  The second is always my favourite part of the survey, where people can write anything they want about the survey, our languages, or our community; it’s overwhelmingly positive, and reminds me why I’m proud to be part of all of this.  The only one I’ll quote here is one that appeared a bunch:

Thanks for the survey Chas!

My pleasure, thank you for participating!

Survey data access

Complete access to the survey reports (with a pretty snazzy filtering and drilldown UI) as well as all the raw data are available here; the password for that shared view is "HvmqkIS3dx3".


2013 State of Clojure & ClojureScript Survey

This is the fourth year in which I’ve offered up a “State of Clojure” survey.  As before, my intention is to get a sense for the size and overall level of activity in the Clojure community, and to capture a certain zeitgeist of the community’s mood, practices, concerns, and so on.  If you’re interested in history, you can see the results of all prior surveys.

The big change this year is that questions related to ClojureScript are included such that it roughly shares the limelight with Clojure.  In part, that’s selfish; I started using and contributing to ClojureScript seriously over the last 18 months, so I’m interested to have some perspective on what other users care about.  More importantly, there’s no denying that ClojureScript has matured significantly over that period, and has attracted a great deal more attention. Especially given its potential, it’s time it took its proper place in the survey as a peer implementation of the Clojure Principles we’ve all come to appreciate.

Results from the 2013 State of Clojure & ClojureScript survey are here.

This year, I’ve moved to hosting the survey with PollDaddy, which has graciously set me up with a ‘pro’ account; this will allow us to get far more advanced analysis out of the results than was ever possible with the primitive Google Docs form I’d used up until now.

As before, I’ll follow up with the results, some charts and graphs, and some sad attempts at witty commentary.  Of course, all of the raw data will be available as well.

Finally, please do what you can to spread this survey around to those you know who are working with Clojure and/or ClojureScript.


My Mom has Multiple Sclerosis

She’s likely had it since at least ~1983; the first sign was a temporary bout of optic neuritis that left her half-blind for a month or so.  All was well for years, until 1996: we were walking out of the DMV — we went so I could get my driver’s permit — and she suddenly stopped walking in the parking lot…and couldn’t move any further.  After a minute or two of my asking what was wrong, and her looking a bit panicked and not knowing what to do, her legs started working again.  Turns out, I had gotten my driver’s permit just in time; I drove home that afternoon.

In the years since, a lot has happened: diagnoses (“secondary progressive multiple sclerosis”), various attempted treatments, the walker, then the wheelchair, then full quadriplegia.  My father, my wife, and a cadre of part-time nurses (thanks, everyone, for your contributions to Medicare/Medicaid/MassHealth) have helped her remain as independent as someone in her position can be.

No doubt, it’s been hard on everyone involved, but I think we’ve done okay, and made the most of a genuinely shitty situation.  Of course, it’s been hardest on my Mom.  I can’t imagine what it’d be like to be in her position (though believe me, I’ve tried), but — understandable dips in the road aside — she has remained remarkably upbeat, engaged, rational, and hilariously concerned with others’ well-being.  Courageous, that’s what she is, and I’m so proud of her for it.

But, I write all this not to share my perspective, or tell my story, but to suggest that you should listen to her tell hers: she recently started a blog, Living with Advanced Multiple Sclerosis.  If you’ve gotten this far in this post, you should go read it.

One thing that did decline as her disease progressed was her interest in her most fervent passion in life, literature and writing.  She was an English major in college, and along with helping me become the language pedant I am today, she helped cultivate my love of rhetoric and the written word.  While it was sad to see her stop reading and writing for so long (it can be hard to keep up with it when neither your arms nor your eyes work well, or at all), it made seeing her start to write again that much more joyful.

So, check out her blog, and pass it along to anyone that you think might appreciate it.  I know she’d love to know that other people are following along, especially if they are also on a journey with a degenerative neurological disease like Multiple Sclerosis, Parkinson’s, Muscular Dystrophy, and so on.  It’s no Shakespeare (something she’d readily admit; since her nurses type for her, she can’t meticulously edit and fret over every word for hours like she once did).  No, it’s better: her unvarnished voice talking about life, something I love seeing on the “page” again.


[ANN] @IMHO

Update: I’ve shut down IMHO. It was amusing while it lasted, but it’s clearly not a domain in which I have a fundamental interest. I hope you all enjoyed playing with it while it was among us!

I don’t write stupid Twitter apps often…but when I do, they’re really stupid.

— Me, wearing a smoking jacket

I’ve always enjoyed Twitter, but I’ve never built an app of any kind around it, or done anything with Twitter data.  Insofar as that’s roughly the modern equivalent of a ‘hello world’ program, I was perhaps lacking in some critical way.

Well, no longer.  I, too, have built a Stupid Twitter App™: go check out IMHO (or its companion Twitter handle, @IMHO)!

While Twitter has many roles — some quite important — everyone uses it as a dumping ground for their opinions.  IMHO (a common Internet colloquialism meaning “in my humble opinion”) will maybe provide an entertaining and perhaps informative view of those opinions, in aggregate.  I’ve seeded the site with just short of 2 million opinions culled from an archive of 90 million tweets; but, from here on out, new opinions will only be added if they are tweeted at @IMHO, like so:

.@IMHO The Celtics will get revenge next year!

Through a combination of some crude natural language processing and a lot of hard-working squirrels, opinions are parsed and indexed by their subject/topic.  Oh, but leave your nuanced stances at the door: only simpler, discrete assertions will be recognized.  Madness?  No, this is Twitter!
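To give a purely hypothetical sense of what “crude” means here — this is emphatically not IMHO’s actual code — a naive Clojure extractor that only recognizes simple “X is/are/will Y” assertions might look something like:

    (ns imho.sketch
      (:require [clojure.string :as str]))

    ;; Hypothetical and deliberately crude: only simple
    ;; "<subject> is/are/will <assertion>" statements are recognized.
    (def opinion-pattern
      #"(?i)^\.?@IMHO\s+(.+?)\s+(is|are|will)\s+(.+?)[.!?]*$")

    (defn parse-opinion
      "Returns {:subject ... :assertion ...} for simple assertions, else nil."
      [tweet-text]
      (when-let [[_ subject verb tail] (re-matches opinion-pattern tweet-text)]
        {:subject   (str/lower-case subject)
         :assertion (str verb " " tail)}))

    ;; (parse-opinion ".@IMHO The Celtics will get revenge next year!")
    ;; ;=> {:subject "the celtics", :assertion "will get revenge next year"}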

Right now, only reverse-chronological listings of opinions are available (far more interesting visualizations are of course close at hand); perhaps I’m just that puerile, but I’ve found even that simplistic view entertaining enough for now.  Check out some popular/contentious topics.

There’s plenty of snort-worthy gems in there, if you care to go fishing.  Again, some better UI will make it easier to surface them, which will be added if the service catches on in any way.  So, if you have an opinion you’re going to put on Twitter, put a .@IMHO on it.


100% time

Perhaps you’ve heard of “20% time”. In many ways, it or something like it is table stakes for many software folk, and perhaps other creative specialists as well; i.e. if a company doesn’t offer something like 20% time, it may have a hard time attracting top talent, and could end up suffering by not profiting from the results of the outside-the-box sorts of ideas and work that emerge from people’s 20% time.  Some organizations — Valve comes to mind as the most prominent — even make it policy that staff are to use the “law of two feet” to self-organize, with the theory that more impactful work will emerge from the resulting economics.

I relate to this insofar as I’ve been lucky to have wandered into having what I call, tongue-in-cheek, “100% time”.  Most discussions of 20% time seem to characterize the mix as being 80% slog, 20% “freedom”. In contrast, “100% time” is a mix of near-complete professional and personal freedom where I can be available to every opportunity that comes my way.  Whether related to new business, following new creative inspirations (in programming, web or graphic design, or writing), pursuing scholarly interests, keeping myself healthy, traveling at length, enjoying the company of and taking care of family, volunteering for causes I believe in, or slacking to recharge, 100% time means that I choose what to care about, and then dedicate all my energy to making that choice have impact.  Having had the opportunity to live like this and becoming acutely aware of it, I’m nearly certain I won’t be able to “go back” without a fight.

I don’t write this to brag.  More than anything else, if 100% time seems out of your reach, I hope to be some proof that it’s not.

There’s nothing in my past to suggest that I should be where I am, doing what I am doing: no family money, no name-brand school (or diploma, for that matter), no powerful connections.  In fact, I came very, very close to getting stuck in a “regular” job ten or eleven years ago, after sinking $140K in debt trying to start the first incarnation of Snowtide.  I thought that had been my one “shot”: after failing dismally and being forced to take any work I could get (initially landing in a hyper-dysfunctional office run by maniacal Russians…), I figured that my life’s trajectory was fixed.  I know I shouldn’t lament a “professional” career with stable companies — it was far more than I had any right to expect — but, perhaps irrationally, I wanted to be king, with direct control over my life and my future.

Thankfully, I’m either too stubborn or too stupid to give up.  I extracted PDFTextStream from the smoldering ashes of my wayward startup, and managed to build a small but reliable business serving fabulous customers in a technically-challenging niche.  Somewhere along the way, I discovered that the biggest benefit of entrepreneurship was not money (as many have said, far safer routes to much larger piles of cash exist elsewhere; go be a banker or management consultant if that’s where your objectives lie), but time, and the freedom that comes with it.  Once my livelihood and income were decoupled from the time I had to dedicate to earn it, I felt like I finally understood the concept of opportunity cost and the aphorism of spending time: you will exist for only a finite duration, and you’d best ensure that that precious capital is used wisely to build the most value possible.

How you personally define “value” is where all the fun and challenge comes from.  Build bigger, better, more beautiful things; learn to make music and art and drama; inspire an empire and then go save the world; love friends and family and neighbors and strangers.  Do what you can to have the opportunity to make those choices yourself, so you can be the best person you can be, and make the most of the time you’ve been allotted.


Cargo-culting to success and understanding

In doing my part to further the fortune-cookie bullshit cycle that is Twitter, I tossed out this nugget yesterday:

Little did I know that it would spur such conversation, debate, DMs, emails, and so on.  I was going to expound on the original tweet in eight parts, but thought better of it.

Cargo culting is an actual thing, of course.  Thanks to Justin Sheehy for this f’rinstance:

He rightly pointed out that real cargo culting is all about mimicking the “trappings of others, not their essential behaviour”.

Mimicking triviality certainly happens — witness the fads that drift in and out of tech culture around the most trivial things, e.g. the super-minimal desk/workspace, actually using a hammock for “hammock time”, the revolutions in agile/xp/lean/scrum/etc. nomenclature, and so on — but I don’t think that’s what most people mean by “cargo culting”, at least when it comes to software, programming, and related design.

By “cargo-culting”, I was generally referring to doing something without properly knowing why it might be good or bad.  In most cases, doing this is a recipe for disaster, especially if the person doing it is of the incurious sort that is looking for permanent shortcuts.  “Keep adding indexes if your query is slow”, “go look in the ‘pattern’ book when trying to design something”, “use Mongo if you need to be web-scale”.

(The dangerous bit is that we all occasionally cargo-cult things, implicitly, simply because we are social creatures, and we’re inevitably influenced by certain patterns and design philosophies and technical approaches being baked into the fabric of our time and place in the world.  My sense is that many discontinuous innovations can be strongly correlated with someone becoming explicitly aware of these undercurrents, rationally re-evaluating their bases, and offering an alternative better suited to modern times and uses.)

What I was tweeting about is a different thing, though.

Especially when I’m groping into a new domain, I often ape others’ work with abandon, and with only a dim view of the ‘why’ of the motivating design problems and decisions.  Doing so produces technical, design, and understanding debt, but I take it on willingly.  Making constant progress is often more important and more useful to me than methodically building a formal understanding of the theory or practice related to my current task.  As I go along in the work, I continually look for ways to understand those decisions I previously adopted wholesale.  Because of the bias towards constant progress, I generally have the benefit of having a working system I built in front of me, so I have a tangible sense of the impact of those decisions.  I then can carry on having understood the original ‘why’ motivating them; and, if I’m feeling adventurous, I can use that understanding to usefully re-evaluate those decisions, and maybe make different ones to yield a better result.

Maybe I’m being too loose with the terminology, but the first part of this process certainly sounds like “cargo-culting” in software to me.  The difference is that:

  1. I explicitly acknowledge that I’m taking a shortcut, with the distinct intention of returning to the topic later.  (Not doing so would be an abject failure on my part.)  This is the “first approximation” part of the notion: the shortcut is a bootstrapping mechanism, not a final destination.
  2. I am extremely selective when it comes to whose work I’ll choose to look at seriously.  Code pasted into a Stack Overflow answer doesn’t qualify, nor does whatever is found in most popular “technical” books.  Libraries and systems built by people who have spent decades, careers working on problems similar to those I’m facing? Getting closer; but even given that sort of “population”, care must be taken to match up the matrix of requirements as closely as possible.  e.g. if I’m in the neighborhood of distributed systems, taking hints from someone focused on embedded applications may be fraught with peril.

I’ve never been able to read a dissertation or book or three, and *foom*, produce out of whole cloth something well-designed, efficient, and extensible — but I’ve seen others do just that.  So, I know that what I’ve discussed here is an inefficient path, at least in certain domains and for certain people.  I think it is a natural response to attempting to build nontrivial things given fifty-ish years of software and engineering history to consider.

Finally, the terminology.  Perhaps given this notion of historical artifact, a better phrase than “cargo culting” might be “anthropological reconstructive software archaeology”?  Doesn’t quite have the same ring to it though, eh?


Mostly Lazy, back in the saddle

I was stoked to reboot Mostly Lazy by talking yesterday with Chris Houser (a.k.a. Chouser), this time via Skype.  It’s good to be back, so go check out the latest episode, maybe follow the @MostlyLazy twitter feed (who knows, it might not suck), and suggest some future topics, questions, etc.
