
Briefly on News Corp and Google

Two companies this past week raise the interesting question of what it means for a company, or the head of a company, to "know" when the company is behaving badly. And, if we dig deeper, the question of whether we care for purposes of accountability.

First, the British Parliament released its report on whether Rupert Murdoch or James Murdoch knew about the phone hacking and the associated illegal activity undertaken to cover it up. The committee concluded that while the Murdochs did not have personal knowledge, they had created an environment in which such illegal activity was encouraged and had exercised "willful blindness" to avoid the personal knowledge that would have required them to take action.

Second, Google voluntarily released the FCC's report on its "Spy-Fi" scandal. It turns out that the trucks driving around collecting information for Google Street View deliberately intercepted unencrypted WiFi transmissions and collected all the data they could. There was at least one email from the engineer who designed the software to the people managing the project, which prompted the FCC to conclude that Google "knew" it was collecting the information. While this did not, according to the FCC, break any federal laws under the Communications Act, the Report found that Google had impeded the FCC's investigation and failed to cooperate fully. Accordingly, the FCC fined Google $25K. (I haven't read the FCC Report, so I'm not sure whether I agree with them on the enforcement side, but the FCC is fairly conservative when it comes to enforcing its criminal jurisdiction. Also, while $25K is cheap, Google did ultimately cooperate, and statute limits the FCC's ability to impose fines in these cases.)

These cases raise the rather interesting question of what it means for a corporation -- especially a large one -- to "know" that it is committing a crime. Because intent is often a key element of criminal (rather than civil) enforcement, this is not an idle question. And in an age of corporate personhood, the notion that corporations enjoy the rights of real people but are incapable of committing crimes because they are not actual people is rather unsatisfying. Worse, it eliminates the deterrent effect of criminal law in such cases. But a rule of strict liability seems inherently unsatisfying as well from a policy perspective. For one thing, what would it mean? In the case of Murdoch, where there seems to be evidence of willful blindness on the part of those who control the company, some sort of criminal liability for the decisionmakers is more justifiable under traditional principles of criminal law (which often treat willfulness as a form of intent). But in the Google case the knowledge stops at a fairly low level of management (and, in any event, the conduct does not appear to have violated federal law).

We used to rely on shareholder control, but changes to corporate law have made shareholder control a myth. The legal impediments to a shareholder trying to hold management accountable are fairly high, and the fact that it is possible to have multiple classes of common stock that leave voting control in the hands of a small number of shareholders makes it even more difficult. (For example, the publicly traded Class A stock of Google carries, I believe, one vote per share, while the founders and a handful of early investors hold Class B stock that carries ten votes per share, giving them voting control despite owning a much smaller percentage of the actual company.)

Fun problems for a world in which corporations have become the dominant life form.