Quoted from email about "Policy change in regards to Social Media use for social learning from Centre for Innovation, Leiden University" by Tanja de Bie, Community Manager (Centre for Innovation, Leiden University via Coursera)
The Centre for Innovation of Leiden University has always strongly supported social or collaborative learning in online learning: the interaction between learners facilitating learners, whether that is in discussion forums, peer review assignments or in our Facebook groups, contributes to a deeper understanding of subjects, and prepares learners to apply their knowledge.
Therefore we have decided to close all Facebook groups, Whatsapp groups and Instagram accounts currently under control of the Centre for Innovation, per the 29th of March 2019, and have adjusted our courses accordingly.
On behalf of Centre for Innovation, Leiden University,
Tanja de Bie, Community Manager
At least part of Leiden University is apparently making the moral and ethical call to close all their Facebook-related properties. Kudos! They've already got a great website; perhaps they'll move a bit more toward the IndieWeb?
The philosophy of Hannah Arendt points to the banal evil beneath Facebook's many mistakes.
We definitely need some humanity and morality in our present mess. More and more I really want to rage quit Facebook for what it's doing to the world, but I would like all my friends and family to follow me.
TIM: Life begins at conception. Pregnancy is a gift from God, which is why I’m cosponsoring this anti-abortion legislation after asking my lover to have an abortion. I’m 65 and she’s 32, but you probably figured that out already.
Gina Haspel, President Trump’s pick for C.I.A. director, faced the Senate Intelligence Committee for the first time on Wednesday as her confirmation hearings began. Lawmakers addressed her with an unusual line of questioning: What is your moral character?
On today’s episode:
• Matthew Rosenberg joins us from Washington, where he covers intelligence and national security for The New York Times.
We've recently seen the head of the F.B.I. ousted, ostensibly because he wouldn't take a loyalty oath and refused to close an investigation. Could this happen again? Could it be far worse?
They stopped far short of opening up questions that hark back to the Third Reich, when Hitler's government commanded people to commit genocide. We all know there's a line one can't cross while using the defense that "I was commanded to by the authorities."
So the real question is: will Haspel stand up to Trump to prevent any moral atrocities he may want to inflict, whether in areas like torture or, perhaps, far worse?
After a suspected chemical attack in Syria, President Trump said Iran and Russia were responsible for backing “Animal Assad.” But Damascus may view the United States as being focused on a different fight.
President Trump has warned that there will be a “big price to pay” after yet another suspected chemical weapons attack in Syria.
But the suspicion that Syria continues to use those weapons suggests it views the United States as being focused on a different fight.
On today’s episode:
• Ben Hubbard, who covers the Middle East for The New York Times.
Listening to this a few days on, it sounds like Trump has even more bluster than Obama, but he's doing roughly the same thing. Yet again, small countries that should know far better are continuing to tread on their own people. Sadly, America is doing it too, just with far more sophisticated weapons. If we can't figure out right and wrong at the big, obvious scale, how can we have proper morality at the smaller and more subtle scales?
I am tired of listening to comments on Facebook mentioning that it's already too late, or that there's no point now, or that the platform is too valuable. We need to stop this. We are conveying to the owners and other parties involved: "Don't worry. It doesn't matter how badly you screw up. You own us."
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."