
PHILANTHROPY 2173

The future of good


I heard this story last week, from the mother of a toddler.

The kid is home, playing with Grampa. Kid is just past the peek-a-boo stage, now experimenting with hide-and-go-seek. Like most kids of this age, hiding generally involves standing on the other side of a chair or putting a piece of paper on her head. Not really hidden. But Grampa didn't get the message. When it's his turn to hide he goes in the other room. Toddler takes hands off eyes, looks around. Doesn't see Grampa. Looks a little bit worried but doesn't move. Waits another minute. Shouts out, "Alexa, where's Grampa?"

I'm going to let you sit with that.

It led us to all kinds of questions, including about advertising on these devices. Others in the group said it's small but growing. This article says it's already here and that we (the people) like it. All of us agreed it seems inevitable.

Question for nonprofits: you ready to pay whatever it will cost to make sure you are the one (and only) response when someone starts asking, "Alexa (others), who should I donate money to?"

Question for the rest of us: You really want some engineered algorithm (no doubt based on who paid the most) telling you where to give your money?

Sigh.



Author: Lucy Bernholz
Posted: June 17, 2019, 10:48 pm
I've participated in a lot of conferences, panels, discussions, etc. about "nonprofits and AI," "foundations and AI," "AI for good"* and so on. The vast majority of them miss the point altogether.

It's not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations will, at most, buy software that runs algorithmic analysis and some AI over a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, the ethical implications, the effects on staff and board, and so on. Those are important vendor-selection considerations, and every organization should understand how these tools work, but they're hardly worth a conference panel (IMHO) - not the "black magic" or "sector transforming phenomenon" that a conference organizer would want you to think.

The REAL issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that FUNDAMENTALLY shift how the people and communities served by nonprofits/philanthropy are being treated.
  • Biased policing algorithms cause harm that nonprofits need to understand, advocate against, deal with, and mitigate.
  • AI driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and (at worst) remediate, (at best) improve upon. 
  • The use of AI driven decision making to provide public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s Automating Inequality). 
  • Algorithmically-optimized job placement practices mean job training programs and economic development efforts need to understand how online applications are screened, as much as they help people actually add skills to their applications.
This essay on “The Automated Administrative State” is worth a read.

The real question for nonprofits and foundations is not HOW will they use AI, but how is AI being used in the domains in which they work, and how must they respond?



* I try to avoid any conversation structured as "_____ for (social) good," where the blank is the name of a company or a specific type of technology.
Author: Lucy Bernholz
Posted: April 8, 2019, 3:31 pm
Yep, deliberately trying to provoke you with the headline. Here's what provoked me:

The news that two airplane crashes killed a total of 346 people, in part due to a software upgrade that was "optional" (read: cost more).

This story about electronic health records (software) and deaths that ensued from resultant poor medical care.

What does this have to do with philanthropy and civil society?

Philanthropic and civil society organizations are as dependent on software as are businesses and governments. Do you know how your software works? What its vulnerabilities are?

Your work may not involve the difference between life and death, but if you're collecting information on lots of people and not respecting their rights in collecting it, not protecting it once you have it, or not managing it (and the software you use to hold and analyze it) in line with your mission, how much good are you really doing? Are you making the people your organization serves, or the nonprofits you fund, more vulnerable with your data practices even as you try to do good with your dollars?
Author: Lucy Bernholz
Posted: March 23, 2019, 1:43 am
The World Food Programme recently announced a partnership with Palantir. There's a lot to be concerned about here - in this specific instance and in the broadest possible sense for understanding civil society's role regarding digital data use. Please read this open letter:

https://responsibledata.io/2019/02/08/open-letter-to-wfp-re-palantir-agreement/


Does your organization know how the tech companies you partner with use the data you have on your constituents? It's not just about "ownership," but about the insights and algorithms and predictive tools that may be built on that data. Are you "locked in" to a platform? If you wanted to switch vendors for your CRM or mailing lists or social media, could your organization get its data back?

How is your organization managing data? With any more respect for the individual people from whom it comes than these big software/analytics/insight companies? If not, why should anyone trust you with their data?

These are pivotal questions - and we need answers and processes and protocols and regulation. Civil society is not meant to be a poor cousin of big business, an outsourced arm of government, or the "data washing" arm of either.
Author: Lucy Bernholz
Posted: February 8, 2019, 7:04 pm
The tenth annual Blueprint - Blueprint 2019 - went live in December. You can find it and the entire decade's archive here.
On January 23rd we'll be hosting our annual "prediction-palooza" (free, online) discussion about philanthropy predictions. Information on that is available here.
In the meantime, I've just come off a conversation with a research group preparing a report for a big foundation on the future of philanthropy. I get asked to do a lot of these. I only agree to these conversations if there is going to be a public version of the report. I'm told that's the case - this report should be available in April.

Some thoughts as I was asked to reflect on the last 5 years and then look ahead 10 years.

Looking back: 
All the new products, platforms, and ways to give (DAFs, impact investing, crowdfunding platforms, text donations, cause marketing, etc.) are not adding up to more giving by more people. As Indiana University notes, since 2000 we've lost 20 million givers - at least as recorded by data on charitable donations to tax-exempt nonprofits. This is over the same 19-year time frame that brought us online/mobile donations, online data resources about nonprofits, and more.

-> Perhaps we can stop assuming product innovation equals growth?

Where have the "missing millions" gone? I doubt people have given up on giving - after all, we've been doing it for thousands of years. I think we have an "instrumentation" problem. Which is to say, we're measuring the wrong things. Changes in the tax code are expected to result in changes in giving to nonprofits (and in how useful itemized tax forms will be for measuring giving behavior).

-> Perhaps we can ask whether we're measuring the right things?
We need new instruments to measure how Americans give and to whom. They should include measurements of person-to-person giving (e.g., as happens on crowdfunding platforms), political donations, contributions of time, investments intended to produce social/environmental outcomes, and money raised via product purchases (BOGO or cause marketing). I've been calling for this since at least 2015 (see here), and had intimations about it back in 2008 (see here).

Looking Ahead:

Commercial platforms and nonprofit data:
Does anyone really think Facebook is going to be the same thing in 2029 that it is in 2019? Not even the folks at FB would say that. Every nonprofit and philanthropist that is managing their giving, their outreach, their donors, etc. on this single platform should beware. The rules are going to change, the compan(ies) will push back, there will be all kinds of ups/downs between now and 10 years from now - but in no imaginable future is doing right by nonprofits (in terms of their data and long-term relationships with donors) part of the growth plans of that company. If you can imagine either a different FB in 10 years or no FB 10 years from now, then it seems like a good idea not to put all your data eggs in a FB basket. (Or any commercial platform driven by a business model unrelated to how well it serves the social sector.)

Charity and Politics:
The legal line between these two domains is man-made. It's changed over time. It's thin and it's fragile. The current presidential administration is determined to destroy it (see the Johnson Amendment fight, as well as court cases on donor disclosure in politics). There's not enough manpower at the IRS to defend any boundary that might exist. Digital data and systems make defending the line even more difficult than it was in the analog age. Many advocates and advocacy organizations would like to see the line gone. Individual people may not care as much about separating political and charitable action as policy makers and nonprofit leaders want them to. Assuming the old boundaries between these domains function as intended is fooling oneself. We should put our attention into writing rules that protect our (often conflicting) rights (association, expression, privacy), shed light on political corruption and the excessive influence of wealth, and assume digital data collection and exposure, rather than nostalgically assuming that protecting a legal distinction established in the 1950s is the best (or only) way forward.

Shiny Objects
Anyone but me noticing that the marketing hype about blockchain is starting to quiet down, just as digital systems that focus on security and encryption are growing in actual use? This is a good thing - stop gawking at the new packaging and let's focus on the values that particular activities require. In some cases, permanent records of transactions are a good thing (supply chain verification of objects, possibly). In other cases, distributed, immutable records may not be such a good idea (human ID, for example).

Artificial intelligence (AI), predictive algorithms, and machine learning are three more shiny objects. Most people ask me "How will nonprofits and/or philanthropists change by using AI?" I think this question has the subject and object in the wrong order. The more relevant question for most donors and social sector organizations is "How will the use of AI (etc.) change what the organizations/donors are doing?" Government bodies and commercial companies are already using these tools - they shape what you see online, what benefits you qualify for, your chances of being audited by the tax authority, your chances of getting a speeding ticket, the keywords you need to enter in your job application, etc. They are changing the nature of the problem space in which social sector organizations and philanthropists do their work. This is not the future, this is now. And this is not the edge case of a few organizations with some good data and data scientists; this is the great mass of organizations and donors. I'd love to see some real discussion of how philanthropy and social sector organizations can and should change to be effective in a world already being shaped by AI (etc.). Then, for dessert, we can talk about the exceptions to this rule.

It's the nature of the game that we'll chatter about the latest shiny object. What's much more interesting is how we embody shared values in new ways.



Author: Lucy Bernholz
Posted: January 2, 2019, 8:25 pm
Like many of you I woke up this morning to an email inbox full of leftover Black Friday ads, a whole bunch of Cyber Monday ads, and the Xth day in a row of #GivingTuesday announcements.

Among those was the first clearly-designed-to-misinform #GivingTuesday astroturf email that I've received.

It came from the Center for Consumer Freedom (CCF) - a nonprofit front group run by a lobbyist for a variety of industries including restaurants, alcohol, and tobacco. The umbrella group for CCF - the Center for Organizational Research and Education (CORE) - is also home to HumaneWatch. According to the 2016 990 tax filing for CORE, HumaneWatch exists to "educate the public about the Humane Society of the United States (HSUS), its misleading fundraising practices, its dismal track record of supporting pet shelters and its support of a radical animal rights agenda."

(clip from 2016 990 for CORE)

The email I received from CCF linked to a YouTube "ad." But all of it - the Consumer Freedom website, the email I received, the work of these nonprofits - leads back to a commercial PR firm, Berman and Co., which has been accused of setting up these groups as part of its paid work for industry. None of this was revealed in the email - and if you look at the CCF website to find out who funds it, you find this statement:
"The Center for Consumer Freedom is supported by restaurants, food companies and thousands of individual consumers. From farm to fork, from urban to rural, our friends and supporters include businesses, their employees, and their customers. The Center is a nonprofit 501(c)(3) organization. We file regular statements with the Internal Revenue Service, which are open to public inspection. Many of the companies and individuals who support the Center financially have indicated that they want anonymity as contributors. They are reasonably apprehensive about privacy and safety in light of the violence and other forms of aggression some activists have adopted as a “game plan” to impose their views, so we respect their wishes."
If you check the CCF's 990 form (search under CORE) you'll find that on revenue of $4.5 million (sources undisclosed), the largest expense was $1.5 million paid to Berman and Co. for management fees. The next largest expense is $1.4 million spent on advertising and promotion.

There's no virtue in this circle - just paid lobbyists setting up nonprofit groups to counter the messages of other nonprofit groups. On the one hand, the nonprofit sector must be doing something right when the tobacco and alcoholic beverage industries are trying to shut it up. On the other hand, good luck to you - average donor - trying to figure out what's real and what's not. Even the watchdog groups are sniping at each other.

I've written before about misinformation, the current ecosystem of distrust, and civil society. And here it is. Be careful out there.
Author: Lucy Bernholz
Posted: November 26, 2018, 6:15 pm
Three tweets from yesterday:




Depending on a commercial company for our giving infrastructure is problematic in several ways. First, at any point in time the company (and this company has done this repeatedly) can change its commitment, algorithms, and priorities, and leave everyone who was using it without recourse. Second, we have no way of knowing that the company's algorithms are offering all the choices to all the people. How would you even know if your nonprofit or fundraising campaign wasn't being shown to those you were trying to reach? Third, Facebook owns this data and can tell us whatever it wants about it. Maybe $1 billion was given, maybe it was more, maybe it was less - how would we know?

There's an existing infrastructure for measuring giving in the U.S. and a number of research centers that analyze and report on those trends every year. That infrastructure - from 990 tax forms to The Foundation Center, Guidestar, the Urban Institute, and independent research from Giving Institute or the Lilly School at Indiana U - was built for the purpose of public accountability, to protect the democratic values of free association and expression, and for industry-wide performance improvement. This infrastructure is not perfect. But the data they use and their analytic methods can be checked by others - they can be replicated and verified following the basic tenets of sound scientific practice and good evidence practices for policymaking.

There need to be new ways to understand what's happening on these proprietary platforms - especially if Facebook is moving $1 billion and GoFundMe $5 billion. Those are big numbers for our nonprofit sector. We need to be able to interpret these data, not just reflexively believe what the companies announce.
Author: Lucy Bernholz
Posted: November 17, 2018, 6:51 pm
I've had countless conversations with well-intended people from a number of social sectors and academic disciplines who are working on digital innovations that they firmly believe can be used to address shared social challenges. Some of these approaches - such as ways to use aggregated public data - are big investments in unproven hypotheses, namely that making use of these data resources will improve public service delivery.

When I ask these folks for evidence to support their hypothesis, they look at me funny. I get it: their underlying hypothesis - that better use of information will lead to better outcomes - seems so straightforward, why would anyone ask for evidence? In fact, this assumption is so widespread that we're not only not questioning it, we're ignoring countervailing evidence.

Because there is plenty of evidence that algorithmically driven policies and enterprise innovations are exacerbating social harms such as discrimination and inequity - from the ways social media outlets are being used to the application of predictive technologies to policing and education. Policy innovators, software coders, and data collectors need to assume that any automated tool applied to an already unjust system will exacerbate the injustices, not magically overcome these systemic problems.

We need to flip our assumptions about applying data and digital analysis to social problems. There's no excuse for continuing to act like inserting software into a broken system will fix the system; it's more likely to break it even further.

Rather than assume algorithms will produce better outcomes and hope they don't accelerate discrimination, we should assume they will be discriminatory and inequitable UNLESS designed specifically to redress these issues. This means different software code, different data sets, and simultaneous attention to structures for redress, remediation, and revision. Then, and only then, should we implement and evaluate whether the algorithmic approach can help improve whatever service area it's designed for (housing costs, educational outcomes, environmental justice, transportation access, etc.).

In other words, every innovation for public (all?) services should be designed for the real world - one in which power dynamics, prejudices, and inequities are part of the system into which the algorithms will be introduced. This assumption should inform how the software itself is written (with measures in place to check for biases and remediate their amplification) as well as the structural guardrails surrounding the data and software. By this I mean implementing new organizational processes to monitor the discriminatory and harmful ways the software is working, and implementing systems for revision, remediation, and redress. If these social and organizational guardrails can't be built, then the technological innovation shouldn't be used - if it exacerbates inequity, it's not a social improvement.
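
To make "measures in place to check for biases" a bit more concrete, here is a minimal sketch of one such guardrail: an automated disparate-impact check run over a decision tool's outputs. Everything in it - the data shape, the group labels, the four-fifths threshold, the function names - is a hypothetical illustration, not a prescription for any particular system.

```python
# Hypothetical guardrail: flag a decision tool whose approval rate for any
# group falls below 80% of the best-served group's rate (the "four-fifths
# rule" used in US employment-discrimination analysis).
from collections import defaultdict

def selection_rates(records):
    """Approval rate per group; records are (group, decision) pairs,
    with decision = 1 if the tool approved the person, 0 otherwise."""
    approved, totals = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_check(records, threshold=0.8):
    """Return each group's approval rate and whether it passes the test."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Invented example: a tool that approves group B half as often as group A.
records = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60
print(disparate_impact_check(records))
# {'A': (0.8, True), 'B': (0.4, False)}  -> group B fails the check
```

A check like this is the easy part. The harder, organizational part is the argument above: committing in advance to review, remediation, and redress whenever the check fails, rather than shipping the tool and hoping.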

Better design of our software for social problems involves factoring in the existing systemic and structural biases and directly seeking to redress them, rather than assuming that an analytic toolset on its own will produce more just outcomes. There is no "clean room" for social innovation - it takes place in the inequitable, unfair, discriminatory world of real people. No algorithm, machine learning application, or policy innovation on its own will counter that system, and it's past time to stop pretending they will. It's time to stop being sorry for or surprised by the ways our digital data-driven tools aren't improving social challenges, and start designing them in such a way that they stand a chance.
Author: Lucy Bernholz
Posted: November 16, 2018, 5:23 pm
You know the old trope about people searching the ground around a streetlamp for their lost keys, even if they lost them down the block, simply because "that's where the light is"?

This is a little like how I've been thinking about generosity. We associate generosity - especially in the U.S. - with charitable giving to nonprofits. Everything else - volunteering time, giving to politics, direct gifts to neighbors or friends or others, mutual aid, remittances, shopping your values, investing your values - gets set apart as something else.

And, yes, the motivational and behavioral mix for these actions may be different. But we make a mistake when we center the one - charitable giving - and shift everything else to the edge and think that's based in human behavior. It's actually based in politics and industry.

In the U.S. we've built an infrastructure of organizations (nonprofits) that take up a lot of space in the generosity mix. And we make them register with the government, which allows us to count them. And we require them to report certain actions, which then allows us to track giving to them. Those decisions were political - and have to do with values like accountability and association and expression.

On top of those registries and reports we've built big systems and organizations to make sense of the information. Some of those institutions (Foundation Center) were built as an industry response to possible regulation. Some of those institutions (Guidestar) were built because there was a huge data set of nonprofit tax forms that existed by the 1990s. These data sets served as the "lights" that helped us "see" specific behaviors. It wasn't that other behaviors weren't happening, it's just that there weren't lights shining on them.


Shining a light on these behaviors was done to better understand this one type of generous act - it wasn't done with the intention of judging the others as lesser. But over time, all the light has focused on charitable giving to nonprofits, making it seem like the other behaviors weren't happening or were less important, just because the light was not shining there.

The more the full mix of behaviors happens on digital platforms, the more lights get turned on. Where it is hard to track a gift of cash to a neighbor in need, crowdfunding platforms that facilitate such exchanges (and credit card trails) bring light to those actions. And because more and more acts take place on digital platforms - Facebook claims to have moved $1 billion in the last year - we can now see them better. The digital trails are like new lights shining on old behaviors.

Think of it like a house of generosity. In one room are donations to charitable nonprofits. In the USA, the lights have been burning bright in this room for decades. In another room are contributions to houses of worship. Down the hall is the room of money to neighbors/friends in need. Another room is where shopping for some products and not others happens. Downstairs is investing in line with your values. There's a room for political funding and one for spending time rallying around a cause. Other rooms hold remittances or cooperative funds or mutual aid pools. As each of these behaviors shifts to digital platforms - be it online portals, social media, texting, or even just credit card payments - it's like turning on the light in those rooms. We can "see" the behaviors better, not because they're new but because the digital trails they create are now visible - the light is shining in all those rooms.

Digital trails shine lights on lots of different behaviors. We can see things we couldn't see before. It's going to be increasingly important that we have public access to data on what's going on in the whole house, not just certain rooms. Right now, the data on many of these behaviors is held in closed fashion by the platforms on which the transactions happen - crowdfunding platforms know what happens on them, Facebook tells us what happens there, and so on. We're dependent on the holder of the light to shine it into certain rooms. This isn't in the public's interest. Having the lights turned on is better than being in the dark, but having public access to the light switches is what really matters.

Author: Lucy Bernholz
Posted: November 14, 2018, 8:22 pm
I've been talking to a lot of nonprofit and foundation folks + software developers lately. The good news is these two communities are starting to work together - from the beginning. But there is a long way to go. Just because you're working in or with a nonprofit/social sector/civil society organization doesn't mean unleashing the most sophisticated software/data analytic techniques is a good thing. In fact, using cutting edge algorithmic or analytic techniques that haven't been tried before in an effort to help already vulnerable people is quite possibly a really bad idea.

I've come to believe that the first question that these teams of well meaning people should ask about whatever it is they're about to build is:
"How will this thing be used against its intended purpose?"
How will it be broken, hacked, manipulated, used to derail the good intention it was designed for? If the software is being designed to keep some people safe, how will those trying to do harm use it? If it's intended to protect privacy, how will it be used to expose or train attention in another dangerous way?

Think about it this way - every vulnerable community is vulnerable because some other set of communities and structures is making it that way. Your software probably doesn't (can't) address those oppressive or exploitative actors' motives or resources. So when you deploy it, it will be used in the continuing context of intentional or secondary harms.

If you can't figure out the ecosystem of safety belts and air bags, traffic rules, insurance companies, drivers' education, and regulatory systems that need to help make sure that whatever you build does more help than harm, ask yourself - are we ready for this? Because things will go wrong. And the best tool in the wrong hands makes things worse, not better.
Author: Lucy Bernholz
Posted: October 21, 2018, 5:02 pm
A lot of work on responsible data practices in nonprofits has focused on staff skills to manage digital resources. This is great. Progress is being made.

Digital resources (data and infrastructure) are core parts of organizational capacity. We need to help board members understand and govern these resources in line with mission and in safe, ethical and responsible ways.

Digital data and infrastructure need to become part of the regular purview of boards in thinking about liabilities and line items.
  • Ongoing budgeting for staff (and board) training on responsible data governance 
  • Making sure practices are in place - and insurance purchased when practices fail - to protect the people the organization serves when something goes wrong 
  • Understanding the security and privacy implications of communicating digitally with volunteer board members
  • Horizon scanning on ethical digital practice and opportunities
Digital data governance is as much a part of running an effective organization as are financial controls and good human resource practices. We need to help board members lead.
Author: Lucy Bernholz
Posted: October 12, 2018, 6:13 pm

https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service

No one reads the Terms of Service. Few of us understand who has access to the data we generate all day every day. Rachel Maddow and others continue to refer to Cambridge Analytica/Facebook as the former "stealing" data from the latter, when actually, the latter's business model depended on the former doing what it did.

Our (us as people and civil society) relationship with the companies that make our phones, sell us internet access and data plans, "give" us apps, social media feeds, and "free" cloud storage is a mess. Part of the problem is the metaphors. So here's a new one. Don't think of the software, internet, cloud, app, and hardware companies whose products you use as vendors; think of them as landlords.

Then think about how you read your lease. How you ask for better terms and negotiate for buildouts or rebates. And how, if they told you they'd be coming in and rummaging around in your file cabinets at any time of day or night, taking what they wanted, claiming it as their own, using it to sell to other renters, and even selling it - you'd run.

People are beginning to recognize the creepy landlord relationship they have with their tech vendors. Nonprofit organizations and foundations who depend on Facebook and/or its APIs, Salesforce and its Philanthropy Cloud, Google docs or hangouts - they're your landlord. You're running your programs and operations in their space. By their rules. You wouldn't stand for it in physical space - why do so in digital space?
Author: Lucy Bernholz
Posted: October 3, 2018, 8:25 pm
 
 (Image from Charlie Chaplin, The Circus)


I've been saying - explicitly since at least 2013 - that the digital policy environment and the digital ecosystem are civil society's domains.


This article on why the Russians might hack the Boy Scouts spells out the same point fairly clearly.

Civil society exists in and depends on digital data, gadgets, and infrastructure. We rely on the norms and policies that shape the digital environment, are able to use digital tools to advance our goals, and are subject to the manipulation and sabotage of digital spaces.

The "Russians hacking boy scouts" is a great headline to make the point. Anyone with a desire to manipulate opinions - which includes advertisers, hackers, politicians, extremists, ideologues and all kinds of others - knows that our digital dependencies make it easier than ever to do so through supposedly trustworthy institutions, like nonprofits and "news" sites. In a time of information warfare everyone and every institution operating in the digital space is potentially on the battlefield - intentionally or unwittingly.

Practical, immediate meaning of this for every nonprofit? Your digital presence - website, communication on social media, outreach emails, everything - exists in an information ecosystem that is being deliberately polluted with misinformation all day, every day, on every issue. If your communications strategy still assumes that "hey, they'll trust us - we're a nonprofit" or "hey, this is what the data say," then I recommend you reconsider what you say, how you say it, how you protect what you say, and your expectations of and responses to how what you say gets heard and used.

You may well be speaking truth. However, the digital "room" you are speaking it into is one filled with deliberate, diffuse distractions and detractors (at the very least). It's like trying to show someone a clear picture in a room full of fun house mirrors. It's time civil society started "assuming" (as in taking as a starting point) that the digital environment in which it exists is one of distortion and distrust and start building effective, trusted, and meaningful strategies from there (instead of being surprised each time things go wrong).

Author: Lucy Bernholz
Posted: September 13, 2018, 5:07 pm
(Image: Loch Ness Monster)
Nonprofit Advocacy News, a regular newsletter from the National Council of Nonprofits, is a great source of information.

The issue I received in early August had this little nugget:
"Flexible Giving Accounts: A new bipartisan bill (H.R. 6616) has been introduced to allow employers to create flexible giving accounts, enabling employees to make pre-tax payroll deductions of up to $5,000 per year into an account through their employer and designate the nonprofits to receive the funds. Employers would be able to establish and administer the accounts as part of a cafeteria plan as a fringe benefit to attract and retain employees. While seen as promoting charitable giving, the legislation raises questions about whether employee confidentiality can be protected, whether employee giving options would be expanded or limited by employer preferences, and whether administrative fees will eat into the donations along the lines seen this year in the Combined Federal Campaign."
Hmmm. What's this all about? And who's behind it? And who stands to gain? I haven't had time to do much digging but it's on my radar. 

Here's the Bill. Here's some PR on it from Representative Paulsen of Minnesota, who introduced it. Here's an interview with Dan Rashke, whose foundation supports a 501(c)(6) organization, The Greater Give, that is promoting the bill. Rashke is the CEO of TASC, a company that sells software to companies to manage their workplace giving campaigns.

The idea (as I understand it so far) - let workers at companies (that participate in the Combined Federal Campaign? United Way? Any workplace giving?) designate up to $5,000 in pre-tax funds to campaign-selected nonprofits. The employee contributes to their account over time, and the funds are then paid out on a pre-determined schedule.

The Bill's supporters claim the goal is to increase participation in workplace giving campaigns and counter the potential decrease in giving predicted as a result of the Tax Reform Act's changes to the standard deduction.

Who "manages" the money while it's sitting in this "account?" (I put account in quotes because as Rashke says in the interview linked above "The first thing to understand is that the Flexible Giving Account is not a physical account. In fact, in drafting the legislation, we wanted to take advantage of existing processes and infrastructures for giving." Are these "accounts" employee designed funds that will be managed by their employees? Who reaps any interest/investing income on those dollars? Who earns (who pays) any fees to manage these funds during the year?

I don't know the answers to these questions, but here's my concern - are FGAs the new donor advised funds (DAFs)? If so, will they become a "monster" like DAFs have? There seems to be ample opportunity in the idea, as I see it, for two beneficiaries - vendors of workplace giving software (like TASC) and money managers. The promised benefit to "everyday philanthropists" (their language, not mine) and communities is... that more people will participate in workplace giving campaigns because of the pre-tax deduction built into the FGA. That's a long-term aspiration built on assumptions about tax incentives and giving, premised on short-term money-making opportunities for existing software vendors and (maybe?) money managers.

Your thoughts?
Author: Lucy Bernholz
Posted: August 30, 2018, 3:52 pm
Like people in general, civil society organizations are easily distracted by shiny objects. Although I pinged blockchain as a buzzword a few years ago, the frenzied hype of last year's bitcoin boom and bust (probably manipulated) finally brought the idea through to the general public.

There probably are good social sector uses of systems that permanently store information and make transactions verifiable by distributed participants. And there's lots of experimentation going on. My rule of thumb? Experiments with commodities on supply chains strike me as a safer place to start. Experiments with information about people strike me as disasters in the making.

Here's why:
  • The ethics of permanently storing information about people are treacherous, 
  • The legal frameworks for information about people are dynamic and diverse, 
  • The governance choices that shape different blockchains are poorly understood, and 
  • The technology itself? C'mon. Are we gonna fall for this again? "Trust me, this software is secure and safe." First of all, that's what the blockchain is - software + rules made by people.
When it comes to the blockchain there is no "one thing"; there are many, and they operate under all kinds of rules. Blockchain = software + rules made by people. If you don't know how the software works, what it does and doesn't do, and if you don't understand the governing rules, don't go using the stuff on humans (my rule of thumb). Blockchains promote encryption and permanence. It's scary how often people think they must be the right solution when the problem they're facing is bad database design, missing data, broken incentives, greedy partners, or oppressive governments.
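
To demystify the "software + rules made by people" point, here is a toy sketch of a hash chain - the data structure at the core of most blockchains. It is deliberately minimal and hypothetical (no network, no consensus, invented record strings): "immutability" is just code that detects edits, and people still write the rules for what happens when an edit is detected.

```python
# Toy hash chain: each block stores a record, the previous block's hash,
# and a hash over both. Editing any old record breaks every hash after it.
import hashlib, json

def add_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev})
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to history shows up as a mismatch."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": block["record"], "prev": prev})
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
add_block(chain, "pallet 41 left the warehouse")   # fine for commodities
add_block(chain, "pallet 41 arrived at the port")
print(verify(chain))           # True
chain[0]["record"] = "edited"  # tamper with the "permanent" record
print(verify(chain))           # False
```

The second verify() returns False: the tampering is detectable, but detection is all the software gives you. Who is allowed to rewrite history, who adjudicates, who compensates - those are the rules made by people.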

We do need to be talking with experts from all sectors about if and when and how to use new types of software and governance structures. Software experts can explain what different systems can and cannot do. Sector experts know the human and political problem and whether the challenge is one of security, verifiability, corruption, access, or any permutation of these and other conditions. Governance experts know how rules get made, dodged, and broken, and can advise on designing just systems that can be held accountable. The people whose data might be collected will know their concerns, now and in the next generations (remember, permanence means permanence).

Here are some useful resources from those doing good work on these questions:

Beeck Center, Georgetown University: The Blockchain Ethical Design Framework
Stanford Graduate School of Business: Blockchain for Social Impact: Beyond the Hype

And a funny, insightful video on digital economics, as told through the experience of cryptokitties (and the blockchain).
Author: Lucy Bernholz
Posted: August 21, 2018, 4:30 pm
It's August 16, 2018 (forty-seven years to the day since President Nixon's administration admitted to keeping an "enemies list" of the American people).

Today, newspapers across the United States are running editorials on the importance of a free press and declaring their outrage at the way the current U.S. President treats and talks about the "media."

I've written before about how the last two decades in the "news business" might hold insights to the current and short-term future of civil society writ large. My previous comments have focused on the effects of the transformation to digital distribution (broke print journalism's ad revenue model), regulatory changes (which facilitated the creation of "news" monopolies in print and broadcast), and the entry of new players with different credentials (blurred the understanding of independent, credible news and opinions/propaganda).

(Image: Robert Seymour, Cholera "Tramples the victors & the vanquished both." U.S. National Library of Medicine photographic archive, public domain.)

It's time to add to this list, and to learn from the collective voices being published today by more than 300 papers across the country. The list of changes and threats to journalism now includes, most notably, direct attacks from the White House itself. But we also need to pay attention to the "successful" efforts over the last few years to sow doubt and confusion in the information environment, so that even the most careful, vetted, confirmed reporting now exists in a miasma of distrust and deliberate doubt.

Both of these changes - direct threats from the federal government and an environment of mistrust, distrust, and lies - should be added to the list of realities that face civil society in the United States. (They may apply elsewhere as well, but I'm thinking of the U.S. at the moment.)

Direct threats include: the IRS's regulatory change to no longer require donor disclosure for politicking nonprofits, and repeated efforts to repeal the Johnson Amendment.

The environment of mistrust? Oh, come on. You've been paying attention - distrust of the news media is part and parcel of a corrosion of online communications. Once doubt takes over, it takes over everything. We are all communicating into and via an atmosphere where doubt rules. No association or organization should assume that its communications won't be manipulated, labeled false by opponents, or labeled political by platforms; that facts alone win; or even that its allies are who they say they are.
Author: Lucy Bernholz
Posted: August 16, 2018, 7:26 pm
I've been thinking about the role that volunteers and nonprofits play in providing curatorial and editorial support to the internet ever since 2014, when I learned that Twitter was going to "partner" with several women's organizations following the murders in Santa Barbara, California, carried out by a man in a self-declared "war on women."

Facebook's dependence on NGOs to combat hate speech in Myanmar brought this up again. Especially when the NGOs told Zuckerberg, "No, you didn't live up to your end of the bargain."

And then I heard Tarleton Gillespie speak about his new book, Custodians of the Internet, which put to rest any naive fallacies I once held about these companies not actively curating their platforms.

There was also YouTube's announcement earlier this year that it would rely on Wikipedia entries to help it deal with conspiracy theories. The company didn't even bother to tell the nonprofit in advance (let alone try to consult with the nonprofit as if it might have a say about this plan). This hasn't worked out that well for either YouTube or Wikipedia. Let's think about this. Wikipedia is run by a nonprofit, but the work is done by a global network of volunteers who - everyone knows - are by no means representative of the global population. YouTube is part of Alphabet, one of the world's wealthiest companies, and is itself one of the world's biggest social networks. It has its own curatorial teams. And yet, as Wired notes, both Facebook and YouTube are outsourcing their responsibilities to nonprofits.

This seems unseemly even if you just think about it from an economic standpoint - a big company relying on unpaid labor? Sounds like exploitation. When you start thinking about it in terms of the health of nonprofits or civil society, the exploitation seems even worse.

Just like the open source community has built all kinds of technology that companies rely on, so too are nonprofits providing a kind of critical digital infrastructure in terms of their community voice, commitment to a set of ideals, expertise, and concerns for the vulnerable. Yet the current set of "partnership" arrangements seem destined to throw the nonprofit under the bus - the company saves money, gains reputation, and offloads both costs and liability. The nonprofit gets...what?

Author: Lucy Bernholz
Posted: June 11, 2018, 7:33 pm
I posted this reflection over on DigitalImpact.org - regular readers of the Blueprint - send me your notes!

For those who don't want to click over (and you should) the piece discusses the technological work being done on digital identities - where you would control yours - and its implications for civil society and philanthropy. Go on, read it.
Author: Lucy Bernholz
Posted: April 17, 2018, 3:56 pm
One of many things that have been made more public during this week's congressional hearings with Mark Zuckerberg is the way in which the platform curates content. Zuckerberg bemoaned the reality that it's his job to decide who sees what when.
For those who study curation, platforms, and internet law, this is not new. I'm writing this while listening to Tarleton Gillespie discuss his forthcoming book (recommended), Custodians of the Internet. He's describing the rules, technologies, and people that make up the "moderation apparatus" - the systems that determine who sees what information, when, and from whom. Gillespie argues that this moderation is essential to what the platforms do - it is their value proposition. This runs counter to the longstanding mythos of the open web.

One of the elements of this "moderation apparatus" that Gillespie describes that catches my eye is the role of civil society organizations and nonprofits. Big companies, like Facebook but probably not only Facebook, rely on civil society to do their dirty work. 

In Myanmar, civil society groups that were working with Facebook to take down hateful and violent postings pushed back when Zuckerberg claimed that the company was doing all it could to address these issues. The civil society groups noted that the company was essentially relying on them to voluntarily moderate the site and wasn't providing them with the engineering resources that were needed to do this. They secured a verbal commitment from Zuckerberg to improve the process.

Here's what this means:
  • Facebook was shifting its responsibilities to civil society.
  • Civil society groups aren't equipped for, or paid for, this role. 
  • Civil society groups - by design - are fragmented and contentious. Choosing some of them to do moderation is a value-laden, editorial decision.  
  • Civil society is - from Facebook's perspective in this example - just a low cost, outsourced labor source.  It also, no doubt, shifts liability from Facebook to civil society (not least for the human psychological effects of moderating photos and posts about harm and violence).
Here's what I want to know:
  • How widespread are these kinds of commercial/civil society moderation/curation relationships?
  • How do they work - who's contracted for what? who's liable for what? what recourse exists when things go wrong?
  • What do civil society groups think of this? When might it be a good solution, from civil society's perspective?
  • Some civil society groups - such as Muslim Advocates and Color Of Change - are calling for a civil rights audit of Facebook. Senator Cory Booker took this idea into the hearings. This sort of advocacy - making accountability demands of the platforms - makes more sense to me as the role of civil society: not doing the work, but demanding the work be done. Your thoughts?
Seems to me this starts to raise some really interesting questions about the roles and relationships of nonprofits, companies, and government in digital space.


Author: Lucy Bernholz
Posted: April 12, 2018, 10:25 pm
This article from India Development Review captures some of my thoughts on civil society and digital data.

http://idronline.org/civil-society-and-the-burden-of-data/


Author: Lucy Bernholz
Posted: April 11, 2018, 8:08 pm
Remember when philanthropy, foundations, and nonprofits were unknown? Boy, has that changed - they now play regular roles in news and literature.
  • Senator Patrick Leahy asked Mark Zuckerberg why Facebook had to hear from civil society groups before taking action against violent crimes in Myanmar
    • (editor: Why didn't Leahy also ask Zuckerberg about Facebook's labor exploitation of those groups' volunteers - essentially relying on them as his workforce?)
  • Special Counsel Robert Mueller and the FBI are investigating the President's attorney for foreign payments to Trump's foundation.
  • Meg Wolitzer's new novel features a protagonist who works at a foundation. A review of the novel in Bookforum includes this wonderful line:
    • "...it takes an earnest but compromised nonprofit endeavor as a vehicle for its ideas. With its magical relationship to money, the foundation helps insulate Greer and her beliefs from the world beyond, at least until she must confront the reality of what the suits are doing upstairs"
  • Jonathan Franzen's 2010 novel, Freedom, featured a bird rescue nonprofit. 
I guess not all press is good press. 
Author: Lucy Bernholz
Posted: April 10, 2018, 8:53 pm
(originally posted on DigitalImpact.org)

Have you noticed an uptick of emails from companies like Slack, Google, or PayPal, announcing new privacy policies and terms and conditions? Why the sudden onslaught of updates? The answer is easy. The companies sending these notices are changing their policies to meet the requirements of the European Union’s General Data Protection Regulation (EU GDPR or just GDPR), which will put powerful new enforcement mechanisms into place, starting on May 25, 2018.

If you're a U.S. resident, or working at a U.S. nonprofit or foundation, you may wonder what, if anything, the GDPR has to do with you. Good question. There's no simple answer for everyone outside the EU. But just as those companies (all of which are based in the U.S.) revisit their policies and practices because of the new law, it's a good idea for you to do so, too.

First, the GDPR probably applies to you, whether you know it or not. It’s possible – depending on where your clients and donors live, where your data is stored, or where you provide services – that your organization is subject to fines for not following the new law. In this case, compliance is more than just a good idea, it’s required.

Second, the GDPR is a prompt for a worldwide checkup on safe, ethical, and effective data practices. Many of the GDPR’s provisions align with the data governance principles and responsible data practices that we at Digital Impact advocate for in civil society. Think of the GDPR as providing a framework and set of user-centered guidelines about data that may just align with your mission.
Many resources and consultancies are popping up to help organizations comply with the GDPR.

Digital Impact is here to help you navigate through it. We’re on the lookout for credible, accessible, and affordable resources with particular resonance to nonprofits, foundations, and civil society. In the coming months with help from our community, we’ll be curating new content, holding conversations about data governance and GDPR, and fostering discussion at digitalimpact.org/gdpr.

Check out our starting list of GDPR resources, send us others that you’ve found, and join the community in conversation. Want to share your view on the GDPR with the world? Become a Digital Impact contributor. And if there are topics, tools, or templates you need but can’t find, let us know. Maybe the Digital Impact community can help.
Author: Lucy Bernholz
Posted: April 7, 2018, 11:00 am
Gun violence survivors. The extent and reach of this as part of the identity of millions of people in the U.S. was on heart-wrenching full display on Saturday, March 24. Thousands of people have survived the USA's totemic mass shootings (Columbine, Aurora, Charleston, Virginia Tech, Pulse, Newtown, Las Vegas, Parkland - there are too many to list). Hundreds of thousands, probably millions, of Black people and others in poor, urban communities survive daily gun violence, perpetrated by both civilians and law enforcement. People in these communities have been naming the problem, identifying as survivors, and calling out the epidemic for decades. They survive despite the pain, grief, and identity-shaping nature of the experiences. Yesterday, people from across many different communities - bound by a shared identity that none of them chose - took full hold of the attention of the rest of the country, the media, the world.

Part of the NRA's success for so many years has been that gun owners identify as gun owners. It's not just something they care about, it's part of how they see themselves. This is the argument made by Hahrie Han and other scholars. When an issue becomes part of your identity, you act on it - you participate in civic life, you vote, you hold politicians accountable, you show up.

Yesterday was a full-scale display of how broad and big the group that shares this unwanted identity is - the breadth and depth and multi-generational nature of the people who understand themselves as gun violence survivors.

Now that we've finally heard it and seen how many we are, perhaps this shared identity will contribute to civic action of a scale and persistence to match.



Author: Lucy Bernholz
Posted: March 25, 2018, 6:35 pm

 

Thanks to The Engine Room and the Ford Foundation - this report clearly shows how the digital ecosystem is now core to civil society, what expertise is needed, and the emerging infrastructure to support digital civil society. Read it now.
Author: Lucy Bernholz
Posted: March 13, 2018, 4:17 pm


This post is an excerpt from Philanthropy and Digital Civil Society: Blueprint 2018, my ninth annual industry forecast. Read the entire Blueprint series and join the conversation on social media with #blueprint2018.




The logic, theory, and experiences that connect an open civil society with a stable majority-run democracy are well known. Civil society is meant to be a third space where we voluntarily come together to take action as private citizens for the public good. Majority-run democracies need to, at the very least, prevent those who disagree with them (minorities) from revolting against the system. Civil society provides, at the very least, the pressure-release valve for majority-run governments. Positioned more positively, civil society is where those without power or critical mass can build both and influence the majority. It serves as a conduit to the majority system and a counterbalance to extreme positions. It also serves as an outlet for those actions, rights, and views that may never be the priority of a majority, but that are still valid, just, or beautiful. When it exists, civil society offers an immune system for democracy—it is a critical factor in a healthy system, and it requires its own maintenance. Immune systems exist to protect and define—they are lines of defense that “allow organism[s] to persist over time.”

Civil society always struggles to define its independence from governments and markets. Civil society is shaped by laws and revenue streams, but has different accountability mechanisms and relies on voluntary participation. It is distinct from compulsory government rights and obligations, and can often operate in ways that aren’t about financial profit. But to describe the resulting space as truly independent is aspirational at best. While universal human rights such as free expression, peaceable assembly, and privacy provide its moral and philosophical underpinnings, civil society is shaped by the laws of the country in question. These include regulations about allowable sources of financing, public reporting, governance structures, and defined spheres of activity. At the very least, the boundaries of civil society in modern democracies are set by government action.

We are surrounded by big, fragile institutions. Global companies, established political structures, and big nonprofits have purchased, suppressed, or ignored the fluid and small alternatives surrounding them. Fluid, networked alternatives exist and will continue to spawn. For some time now, the fate of these alternatives has been absorption by the top or diffusion with limited impact. In each sector, there appears to be a notable change of attitude in the way the small views the big. While corporate near-monopolies and dominant political parties are still viewed by some as the natural and best order of things (see, for example, tech executives and incumbent politicians), the big players in each sector are rigidifying. I sense that this is matched by a new attitude from the emergent, smaller, and more fluid groups who aspire to challenge rather than to buttress.

This is where reminding ourselves of the dynamism of a social economy within civil society is so important. It helps us to keep our eyes simultaneously on emerging forms and on the relationships between them (the nodes and the networks). It’s where we see tech-driven alternatives to party politics, nonprofit or research-driven alternatives to corporate data monopolies, and the crowdfunding of public services. What’s changed is not the level of dynamism among these small, fluid, and cross-sector strategies. What’s new is the confrontational nature they now bring. These alternatives don’t see themselves as mere fleas on an elephant; rather, they challenge themselves to be the termites that topple the houses.

The sense of failed systems can be seen in the rise of autocrats where democracy once ruled, in the lived experience of a changed climate even as a few powerful holdouts cling to their self-interested denials, and in the return to prominence of racist or nationalist factions where they’d been marginalized before. Threats about nuclear warheads catch people’s attention. There is a pervasive sense of uncertainty.

Democracies depend on civil society. Closing civil society often precedes a democracy’s shift into autocracy or chaos. Defending civil society is not just an act of self-preservation. Protecting the rights and interests of minority groups, and allowing space for collective action and diverse beliefs, a cacophony of independent voices, and activities that yield neither financial profit nor direct political power, are in the best interest of elected political leaders and businesspeople.

Author: Lucy Bernholz
Posted: January 31, 2018, 4:00 pm