Feedback on the U.S. Open Government National Action Plan
The White House is once again asking for feedback on its Open Government Initiatives. As you may recall, we made several recommendations on comment-taking and data.gov the last time they asked for feedback, and we're going to do the same today.
Specifically, we're responding to this request for input on practices and metrics for measuring public participation. In particular, the White House is asking how it should measure the outputs of the Open Government Directive.
It's a good question, and personally I'm glad they're open to public input here. The history of the Open Government Directive is littered with bizarre metrics for openness: a metric like "high-value datasets" -- with no tangible definition of what "high-value" actually means -- leads to disappointing results from federal agencies on their open government data releases. A metric like "number of datasets released on data.gov" leads agencies to cut datasets into tiny chunks on data.gov, making the federal data repository less accessible to developers than it could be.
So the next phase of open government is to get down to brass tacks. We now have enough experience in the field to come up with real metrics that incentivize success and reflect the ideals behind the Open Government Directive. It's great that the White House is asking these questions. We'll go through each of them and answer as best we can.
What are the appropriate measures for tracking and evaluating participation efforts in agency Open Government Plans?
Short answer: draw attention to success, not compliance.
There's some history here -- when the Open Government Directive launched, the White House came up with a checklist for agencies to abide by. The problem is, while the checklist provided clear goals in the short term, it wasn't very valuable over the longer term because the metrics behind it weren't very meaningful. Agencies have generally settled for meeting the minimal requirements for each checkbox without actively thinking about the motivations behind the Open Government Directive in the first place. Worse, the list itself isn't very dynamic -- once a box is checked, it's checked forever, and the agency never has a reason to revisit the goal again.
We think a better way is to highlight success. We built the Federal Social Media Index with this idea in mind. Agencies that have the most engagement online via Twitter (as we measure engagement) get highlighted. We've clearly defined success as asking questions of the public and receiving answers -- a far more meaningful metric for engagement than the sheer number of followers. In just two weeks since its launch, we've already seen agencies competing over who can seek the most input from citizens to reach the top of the Index each week. And as we'd expect, agencies at the bottom of the Index are working hard to move their way up.
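To make that concrete, here is a minimal sketch of how an engagement score along these lines might be computed. The field names, weights, and sample data are illustrative assumptions for this post, not the actual formula behind the Index:

```python
# A hypothetical engagement score in the spirit of the Federal Social
# Media Index: reward questions asked of the public and replies received,
# rather than raw follower counts. Field names, weights, and data are
# illustrative assumptions, not the Index's actual formula.

def engagement_score(tweets):
    """Score one agency's week of Twitter activity."""
    # Treat any tweet containing a question mark as a question to the public.
    questions_asked = sum(1 for t in tweets if "?" in t["text"])
    replies_received = sum(t["reply_count"] for t in tweets)
    # Weight questions heavily: seeking input matters more than volume.
    return 2 * questions_asked + replies_received

weekly_scores = {
    "@EPAgov": engagement_score([
        {"text": "What should we prioritize in 2012?", "reply_count": 41},
        {"text": "New air-quality report released.", "reply_count": 3},
    ]),
    "@USDA": engagement_score([
        {"text": "Harvest photos from Iowa.", "reply_count": 7},
    ]),
}

# Rank agencies for the week, highest engagement first.
for handle, score in sorted(weekly_scores.items(), key=lambda kv: -kv[1]):
    print(handle, score)
```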
What to Measure
The new initiative's objectives lend themselves to intrinsically better metrics. For instance:

* An overhaul of the public participation interface on Regulations.gov could bring with it a measurement of how many comments on regulation make it to the record.
* A new Regulations.gov ought to be measured by tracking the average comments per regulation per agency now, and taking the same measurement after the new website is released.
In addition to these example metrics, ranking the agencies by how many comments they get per regulation may be a useful measure of success as well. While some agencies will naturally get fewer comments than others (fewer people, for instance, are interested in the goings-on of the Administrative Conference of the United States than in the EPA), having a clear metric that incentivizes agencies to maximize participation is a good thing for open government.
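As a rough illustration, here is one way such a ranking could be computed. The input rows are an assumption -- (agency, docket, comment count) tuples of the sort that could be derived from Regulations.gov data:

```python
from collections import defaultdict

# Illustrative sketch of the metric suggested above: rank agencies by
# average comments per regulation. The docket rows below are made-up
# sample data, not real Regulations.gov figures.

dockets = [
    ("EPA",  "EPA-HQ-2011-0001", 1200),
    ("EPA",  "EPA-HQ-2011-0002", 300),
    ("ACUS", "ACUS-2011-0001",   4),
]

totals = defaultdict(lambda: [0, 0])  # agency -> [total comments, regulations]
for agency, _docket, comments in dockets:
    totals[agency][0] += comments
    totals[agency][1] += 1

ranking = sorted(
    ((agency, total / count) for agency, (total, count) in totals.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for agency, average in ranking:
    print(f"{agency}: {average:.1f} comments per regulation")
```

Running the same calculation before and after a Regulations.gov overhaul would give the before/after comparison described in the list above.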
Just as important as what we track and measure is what shouldn't be used as a metric. These new initiatives ought to avoid reporting numbers by themselves. In the world of transparency, raw numbers alone don't matter, as the Recovery.gov initiative has shown. Though Chairman Devaney envisioned a world of "Citizen IGs" inspecting the data reported by the website, there has yet to be a single enforcement action based on reports from ordinary citizens using that data. That's because it's difficult for a citizen to understand the data in context: if the government spends $4 million building a road, is that good? Bad? Normal? Abnormal? It's very difficult to know.
By contrast, take a look at ChicagoLobbyists.org as an example of how data can be put into context. Instead of simply listing numbers on a page, you can easily see who is spending what on lobbying and relate it to other expenditures. We know, just by looking at this website, that the Department of Zoning and Land Use Policy in Chicago is the biggest target for lobbyists and may warrant more citizen oversight than the Mayor's Office of Special Events.
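To show what "context" means in practice, here is a toy example that answers the $4 million road question above by comparing the figure against comparable projects rather than publishing it alone; all numbers are made up for illustration:

```python
import statistics

# Toy illustration of putting a raw number in context: compare one road
# project's cost against comparable projects instead of publishing the
# figure alone. All numbers here are made up for illustration.

comparable_roads_millions = [2.1, 2.8, 3.0, 3.4, 3.9, 4.6, 5.2]
new_project_millions = 4.0

average = statistics.mean(comparable_roads_millions)
cheaper = sum(1 for cost in comparable_roads_millions
              if cost < new_project_millions)

print(f"${new_project_millions}M road vs. a ${average:.1f}M average; "
      f"costlier than {cheaper} of {len(comparable_roads_millions)} "
      f"similar projects")
```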
In addition to measuring the right things, measurements of success need to be dynamic -- updated regularly so they stay alive and relevant. Metrics ought to be published and shared at regular intervals. If, for instance, a metric is "compliance" or "non-compliance," you really only have to check that box one time, and there is no competition. Today's simple checkbox-based compliance charts are the equivalent of a record that you showed up to the marathon, not that you ran it.
What should be the minimum standard of good participation?
Full compliance should be the minimum standard of good participation. Obviously, if an agency is not complying with the plan, then it's not acting in good faith. The important thing is what we said earlier: compliance means an agency simply showed up, which is the bare minimum we should expect. Now that agencies have been marshaled to the starting line, it's time to measure how well they're running the race.
What are the most effective forms of technology and web tools to encourage public participation?
This is a question that we're passionate about. We believe that the Federal Government cannot participate online in a meaningful way until it starts accepting comments through the web that can then go on the record. In short: you can throw all the technology you want at this problem, but until it's lawful to accept a Facebook or Twitter comment as a regulatory comment -- one that can be admitted to the official record -- meaningful public participation cannot exist.
The reason is simple: citizens rightfully expect to be able to give feedback to their government. It shouldn't be the case that the government can use a medium like Facebook or Twitter to issue press releases but not to take public feedback. It's an unfair burden on citizens to know that the government talks to them through one medium but can only listen to them, in an official capacity, through another.
If the government can talk to us on the Internet, then it has the obligation to listen to us on the Internet.
We believe that government ought to be participating with people wherever they are, especially through social media platforms like Facebook, Twitter, and Google+. And we believe that an online identity on one of these platforms is, as a form of identification, not just equal to but in many ways superior to the address and ZIP code currently used to verify eligibility requirements in official records.
In order for government to meaningfully participate through those platforms, it needs to make significant time investments in listening technologies, like Expert Labs' own ThinkUp. While the burden may seem high to listen to a million voices, talking to people through these channels will soon become default, expected behavior. The Federal Government must prepare for this by building the foundation to handle this new level of input now. And given our eager willingness at Expert Labs to provide both the technology and the know-how for this type of engagement free of cost, and without the burden of a procurement process, we're hoping that many of the excuses agencies use to drag their feet will become obsolete.
Right Tool, Right Job
It’s fundamental that the Federal Government understand the difference between brainstorming and dialog. For its National Dialog on Federal Web Policy, for instance, the federal government brought in experts from the field to discuss the future of government on the web. While we commend the government for doing this, we think IdeaScale is better suited to soliciting ideas (a brainstorm) from a wide swath of people than to conducting a dialog that can provide actionable expert advice to government.
It’s a familiar point, but it bears repeating: the most popular ideas are not always the best ideas. What’s more, synthesizing ideas requires different tools than the simple up-and-down votes a tool like IdeaScale provides. An interesting tool for drawing out expert advice may be http://atroundtable.com/, which provides a way for experts to engage in meaningful conversations. Adopting tools like this will help you glean meaningful insights from what people are saying.
Finally, we think that engaging small, independent software vendors to build software may be a useful thing for government to do. For instance, services like Ducksboard.com and Geckoboard.com provide ways to build dashboards for a fraction of the cost ($9/month versus the hundreds of thousands of dollars typical federal processes spend on this functionality today). Investing in these tools makes shared resources more readily available to other agencies. And many of these vendors can also help government get to market faster because they meet purchase card requirements and don't require separate procurements.
There are dozens of small tools like this that can have meaningful impact on how the federal government operates, while saving significant taxpayer dollars.
What are the most effective strategies for ensuring that participation is well informed?
This is a tricky question for government because it can seem partisan: one person's ignorance is another person's disagreement. This is why we recommend the use of social media for public comment. While you may not be able to guarantee uniformly good public comments, by building a new platform on top of the tools we already have, and by making the filtering choices for those comments visible, we can extract good comments through a process that people will accept, even if they grumble in disagreement.
For example, if government embraced social media platforms as a method for public dialog, tools could be built to surface subject matter experts from among all of the comments. It would be straightforward, for instance, to pull out the comments of Nebraska residents on the Keystone pipeline, to see what Nebraskans have to say about it. Likewise, it would be possible to see what T-Mobile employees say about the T-Mobile/AT&T merger by using the Facebook API.
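As a sketch of the kind of filtering we mean, here is a minimal example that narrows a set of comments by the commenter's stated location. It assumes the comments and profile metadata have already been collected from a platform's API; the record structure is an assumption for illustration, not any platform's actual schema:

```python
# Minimal sketch of filtering public comments by a commenter's stated
# location, assuming the comments and profile metadata have already been
# collected from a platform's API. The record structure is an assumption
# for illustration, not any platform's actual schema.

comments = [
    {"author": "jdoe",   "location": "Lincoln, Nebraska",
     "text": "The proposed route crosses our aquifer."},
    {"author": "asmith", "location": "Austin, Texas",
     "text": "Jobs matter more than the route."},
]

# Keep only commenters whose stated location mentions Nebraska.
nebraskans = [c for c in comments
              if "nebraska" in c["location"].lower()]

for comment in nebraskans:
    print(f'{comment["author"]}: {comment["text"]}')
```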
Government is simply not in a position to ignore a wide swath of its citizens because it deems them poorly informed. Even the poorly informed have a right to have their voices heard. But strategic investments in technology can help the best public voices be better heard.
The Biggest Recommendation
Ultimately, the most effective strategy for encouraging meaningful participation is for government agencies to authentically participate themselves. People typically experience "government" as an authoritative monolith. Right now, companies and individuals all over the world are engaging in social networks familiar to ordinary citizens and responding to feedback constantly. As that becomes more and more normal to people, a government that still holds its dialogue in some special, obscure place on the web will seem more and more foreign.
We can’t settle for that as our government’s standard practice. Fortunately, by following some of the recommendations we’ve made here, by implementing some new tools that are free or very low-cost, and by focusing on a culture of continual small improvements to public engagement, our government can do far, far better at serving the public than it is doing today.