👓 Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan via Pulse | LinkedIn

Read Artificial Intelligence suddenly got a whole lot more interesting by Ilyas Khan

Just over a year ago, a senior Google engineer (Greg Corrado) explained why, in the opinion of his research team, quantum computers did not lend themselves to Deep Learning techniques such as convolutional neural networks or even recurrent neural networks.

As a matter of fact, Corrado’s comments were based specifically on Google’s experience with the D-Wave machine, but, as happens so often in the fast-evolving Quantum Computing industry, the nuance that the then architecture and capacity of D-Wave’s quantum annealing methodology did not (and still does not) lend itself to Deep Learning or Deep Learning Neural Network (“DNN”) techniques was quickly lost in the headline. The most quoted part of Corrado’s comments became a sentence that further reinforced the view that Corrado (and thus Google) were negative about Deep Learning and Quantum Computing per se, and it was quickly conflated to apply to all quantum machines, not just D-Wave:

“The number of parameters a quantum computer can hold, and the number of operations it can hold, are very small” (full article here).

The headline of the article that contained the above quote was “Quantum Computers aren’t perfect for Deep Learning”, which simply serves to highlight the less-than-accurate inference, and I have now lost count of the number of times that someone has misquoted Corrado, or attributed his quote to Google’s subsidiary DeepMind, as another way of pointing out limitations in quantum computing when it comes either to Machine Learning (“ML”) more broadly or to Deep Learning more specifically.

Ironically, just a few months before Corrado’s talk, a paper by a trio of Microsoft researchers led by the formidable Nathan Wiebe (and co-authored by his colleagues Ashish Kapoor and Krysta Svore), representing a major dive into quantum algorithms for deep learning that would be advantageous over classical deep learning algorithms, was quietly published on arXiv. The paper received a great deal less publicity than Corrado’s comments, and in fact, even as I write this article more than 18 months after the paper’s v2 publication date, it has been cited only a handful of times (copy of the most recent updated paper here).

Before we move on, let me deal with one obvious inconsistency between Corrado’s comments and the Wiebe/Kapoor/Svore (“WKS”) paper and acknowledge that we are not comparing “apples with apples”. Corrado was speaking specifically about the actual application of Deep Learning in the context of a real machine, the D-Wave machine, whilst WKS are theoretical quantum information scientists whose “efficient” algorithms need a machine before they can be applied. However, that is also my main point in this article. Corrado was speaking only about D-Wave, and Corrado is in fact a member of Google’s Quantum Artificial Intelligence team, so it would be a major contradiction if Corrado (or Google more broadly) felt that Quantum Computing and AI were incompatible!

I am not speaking here only about the semantics of the name of Corrado’s team. The current home page, as of Nov 27th 2016, for Google’s Quantum AI Unit (based in Venice Beach, Los Angeles) has the following statement (link to the full page here):

“Quantum Computing merges two great scientific revolutions of the 20th century: computer science and quantum physics. Quantum physics is the theoretical basis of the transistor, the laser, and other technologies which enabled the computing revolution. But on the algorithmic level today’s computing machinery still operates on “classical” Boolean logic. Quantum computing is the design of hardware and software that replaces Boolean logic by quantum law at the algorithmic level. For certain computations such as optimization, sampling, search or quantum simulation this promises dramatic speedups. Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate. We are particularly interested in applying quantum computing to artificial intelligence and machine learning. This is because many tasks in these areas rely on solving hard optimization problems or performing efficient sampling”

There is a lot to digest in that quote, including the tantalising statement about the strong “Church-Turing Thesis” (“CTT”). This is a very rich area of debate and research which, if followed even briefly in this article, would take up far more space than is available. For those interested in the foundational aspects of the CTT, you could do worse than invest a little time listening to the incomparable Scott Aaronson, who spoke on this topic over the summer (link here). One last word on the CTT whilst we are on the subject: few, if any, will speculate right now that quantum computers will actually threaten the original Church-Turing Thesis, and in the talk referenced above Scott does a great job of outlining just why that is the case. Ironically, the title of his talk is “Quantum Supremacy”, and the quote that I have taken from Google’s website comes directly from the team led by Hartmut Neven, who has stated very publicly that Google will achieve that standard (i.e. Quantum Supremacy) in 2017.

Coming back to Artificial Intelligence and quantum computing, we should remember that even as recently as 14 to 18 months ago most people would have been cautious about forecasting the advent of even small-scale quantum computing. It is easy to forget, especially in the heady days since mid-2016, but none of Google, IBM or Microsoft had unveiled their advances, and as I wrote last week (here), things have clearly moved on very significantly in a relatively short space of time. Not only do we have an open “arms race” between the West and China to build a large-scale quantum machine, but we have a serious clash of some of the most important technology innovators in recent times. Amazingly, scattered in the mix are a small handful of start-ups who are also building machines. Above all, however, the main takeaway from all this activity, from my point of view, is that I don’t think it should be surprising that converting “black-box” neural network outputs into probability distributions will become the focus for anyone approaching DNN from a quantum physics and quantum computing background.

It is this significant advance that means that, for the very same reason Google, IBM and Microsoft talk openly about their plans to build a machine (and, in Google’s case, an acknowledgement that they have actually now built a quantum computer of their own), one of the earliest applications likely to be tested on even prototype quantum computers will be some aspect of Machine Learning. Corrado was right to confirm that, in the opinion of the Google team working at the time, the D-Wave machine was not usable for AI or ML purposes. It was not his fault that his comments were mis-reported. It is worth noting that one of the people most credibly seen as the “grandfather” of AI and Machine Learning, Geoffrey Hinton, is part of the same team at Google that has adopted the Quantum Supremacy objective. There are clearly amazing teams assembled elsewhere, but where quantum computing meets Artificial Intelligence, it is hard to beat the sheer intellectual firepower of Google’s AI team.

Outside of Google, a nice and fairly simple way of seeing how the immediate boundary between the theory of quantum machine learning and its application on “real” machines has been eroded is to compare two versions of exactly the same talk by one of the sector’s early cheerleaders, Seth Lloyd. Here is a link to a talk that Lloyd gave through Google Tech Talks in early 2014, and here is a link to the same talk delivered a couple of months ago. Not surprisingly, Lloyd, as a theorist, brings a similar approach to the subject as WKS, but in the second of the two presentations he also discusses one of his more recent preoccupations: analysing large data sets using algebraic topological methods that can be manipulated by a quantum computer.

For those of you who might not be familiar with Lloyd, I have included a link below to the most recent version of his talk on a quantum algorithm for large data sets represented by topological analysis.

One of the most interesting aspects illuminated by Lloyd’s position on quantum speed-up using quantum algorithms for classical machine learning operations is his use of the example of Principal Component Analysis (“PCA”). PCA is one of the most common machine learning techniques in classical computing, and Lloyd (and others) have been studying quantum versions of it for at least the past three to four years.

Finding a working quantum algorithm that can be implemented in a real use case, such as one of the literally hundreds of applications for PCA, is likely to be one of the earliest ways that quantum computers with even a limited number of qubits could be employed. Lloyd has already shown how a quantum algorithm can be proven to exhibit “speed-up” when looking just at the number of steps taken in classifying the problem. I personally do not doubt that a suitable protocol will emerge as soon as people start applying themselves to a genuine quantum processor.
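For readers who have not met PCA before, a minimal classical example (in R, using base R’s prcomp on the built-in USArrests data; this is just a sketch of the classical technique, not anything to do with the quantum versions discussed above) looks like this:

# Classical PCA on a small built-in dataset
pca <- prcomp(USArrests, scale. = TRUE)  # centre and scale the four variables
summary(pca)                             # proportion of variance explained per component
head(pca$x)                              # the data projected onto the principal components

The quantum proposals aim to perform the equivalent of this decomposition in fewer effective steps on suitably encoded data.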

At Cambridge Quantum Computing, my colleagues in the quantum algorithm team have been working on the subject from a different perspective in both ML and DNN. The most immediate application using existing classical hardware has come from the team that created ARROW, where they have looked to build gradually from traditional ML through to DNN techniques for detecting and then classifying anomalies in “pure” time series (initially represented by stock prices). In recent weeks we have started advancing from ML to DNN, but the exciting thing is that the team has always looked at ARROW in a way that lends itself to being upgraded with possible quantum components that in turn can be run on early-release, smaller-scale quantum processors. Using a team of quantum physicists to approach AI problems so they can ultimately be worked off a quantum computer clearly has some advantages.

There are, of course, a great many areas other than the seemingly trivial sphere of finding anomalies in share prices where AI will be applied. In my opinion, the best recently published overview of the whole AI space (and one incorporating the phase transition to quantum computing) is the Fortune article (here) that appeared at the end of September; not surprisingly, medical and genome-related AI applications for “big”-data-driven deep learning figure highly in the part of the article that covers the current state of affairs.

I do not know exactly how far away we are from the first headlines about quantum processors being used to help generate efficiency in at least some aspects of DNN. My personal guess is that deep learning dropout protocols that help mitigate the over-fitting problem will be the first area where quantum computing “upgrades” are employed, and I suspect very strongly that any machine being put through its paces at IBM, Google or Microsoft is already being designed with this sort of application in mind. Regardless of whether we are years away or months away from that first headline, the center of gravity in AI will have moved because of Quantum Computing.

Source: Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan, KSG | Pulse | LinkedIn

Trump’s inaugural cake was commissioned to look exactly like Obama’s, baker says | The Washington Post

Read Trump’s inaugural cake was commissioned to look exactly like Obama’s, baker says by Amy B. Wang and Tim Carman (Washington Post)
Food Network star and celebrity baker Duff Goldman posted a side-by-side comparison of the strikingly similar cakes on Twitter.

Even the firm that hired actors to cheer Trump’s campaign launch had to wait to be paid | The Washington Post

Read Even the firm that hired actors to cheer Trump’s campaign launch had to wait to be paid by Philip Bump (Washington Post)
Quite a coda to the campaign.

Kellyanne Conway finally admits the audit was just an excuse | Vox

Read Kellyanne Conway finally admits the audit was just an excuse (vox.com)
Unless Congress makes him, we’re never going to see those tax returns

Speaking on ABC News’ Sunday morning show This Week, White House counselor Kellyanne Conway finally admitted what had been plainly obvious to anyone paying attention — Donald Trump is never going to voluntarily release his tax returns, so the American people will never know who he is in debt to, whose payroll he is on, or how he is personally benefitting from the policy decisions he makes as President of the United States.

Her rationale for this unprecedented breach of norms is that “we litigated this all through the election” and “people didn’t care.”

Continue reading Kellyanne Conway finally admits the audit was just an excuse | Vox

Data mining the New York Philharmonic performance history

Read Data mining the New York Philharmonic performance history by Kris Shaffer (pushpullfork.com)

How does war affect the music an orchestra plays?

The New York Philharmonic has a public dataset containing metadata for their entire performance history. I recently discovered this, and of course downloaded it and started to geek out over it. (On what was supposed to be a day off, of course!) I only explored the data for a few hours, but was able to find some really interesting things. I’m sharing them here, along with the code I used to do them (in R, using TidyVerse tools), so you can reproduce them, or dive further into other questions. (If you just want to see the results, feel free to skip over the code and just check out the visualizations and discussion below.)

All scripts, extracted data, and visualizations in this blog post can also be found in the GitHub repository for this project.

Downloading the data

First, here are the R libraries that I use in the code that follows. If you’re going to run the code, you’ll need these libraries.

library(jsonlite)  
library(tidyverse)  
library(tidytext)  
library(stringr)  
library(scales)  
library(tidyjson)  
library(purrr)  
library(lubridate)  
library(broom)

To load the NYPhil performance data into R, you can download it from GitHub and load it locally, or just load it directly into R from GitHub. (I chose the latter.)

nyp <- fromJSON('https://raw.githubusercontent.com/nyphilarchive/PerformanceHistory/master/Programs/json/complete.json')

Now their entire performance history is in a data frame called nyp!

Tidying the data

The performance history is organized in a hierarchical format ― more-or-less lists of lists of lists. (See the README file on GitHub for an explanation.) It’s an intuitive way to organize the data, but it makes it difficult to do exploratory data analysis. So I spent more time than I care to admit unpacking the hierarchical structure into a flat, two-dimensional “tidy” structure, where each row is an observation (in this case, a piece of music that appears on a particular program) and each column is a variable or measurement (in this case, things like title, composer, date of program, performance season, conductor, soloist(s), performance venue, etc.).

Getting from the hierarchical structure to a tidy data frame was something of a challenge. There are a number of different kinds of lists embedded in the JSON structure, not all of which I wanted to worry about. So I poked around for a while and then created some functions to extract the info I wanted and assign a single row to each piece on a particular program, which would include all of the pertinent details. Here are the custom functions for expanding the list of metadata for a musical work, and then reproducing the general program information for each work on that program. (Note that I left the soloist field included, but still as a list. I’m not planning on using it, but I left it in for future possibilities.)

work_to_data_frame <- function(work) {  
  workID <- work['ID']  
  composer <- work['composerName']  
  title <- work['workTitle']  
  movement <- work['movement']  
  conductor <- work['conductorName']  
  soloist <- work['soloists']  
  return(c(workID = workID,  
           composer = composer,  
           title = title,  
           movement = movement,  
           conductor = conductor,  
           soloist = soloist))  
}  

expand_works <- function(record) {  
  if (is_empty(record)) {  
    works_db <- as.data.frame(cbind(workID = NA,  
                                    composer = NA,  
                                    title = NA,  
                                    movement = NA,  
                                    conductor = NA,  
                                    soloist = NA))  
    } else {  
      total <- length(record)  
      works_db <- t(sapply(record[1:total], work_to_data_frame))  
      colnames(works_db) <- c('workID',  
                              'composer',  
                              'title',  
                              'movement',  
                              'conductor',  
                              'soloist')  
    }  
  return(works_db)  
}  

expand_program <- function(record_number) {  
  record <- nyp$programs[[record_number]]  
  total <- length(record)  
  program <- as.data.frame(cbind(id = record$id,  
                                 programID = record$programID,  
                                 orchestra = record$orchestra,  
                                 season = record$season,  
                                 eventType = record$concerts[[1]]$eventType,  
                                 location = record$concerts[[1]]$Location,  
                                 venue = record$concerts[[1]]$Venue,  
                                 date = record$concerts[[1]]$Date,  
                                 time = record$concerts[[1]]$Time))  
  works <- expand_works(record$works)  
  return(cbind(program, works))  
}

Then I used a loop to iterate these functions over the entire dataset (13771 records through the end of 2016, when I downloaded it, though this is a dynamic dataset that expands as new programs are performed), then saved the result to CSV and made it into a tibble (a TidyVerse-friendly data frame).

db <- data.frame()  
for (i in 1:13771) {  
  db <- rbind(db, cbind(i, expand_program(i)))  
}  

tidy_nyp <- db %>%
  as_tibble() %>%
  mutate(workID = as.character(workID),
         composer = as.character(composer),
         title = as.character(title),
         movement = as.character(movement),
         conductor = as.character(conductor),
         soloist = as.character(soloist))
tidy_nyp %>%
  write.csv('ny_phil_programs.csv')

This takes a looooooong time to process on a dual-core PC, which is why I was sure to save the results immediately for reloading in the future. Normally I would write a function that could be vectorized (processed on each value in parallel), which takes advantage of R’s (well, really C’s) high-efficiency matrix multiplication capabilities. However, because the input (one record per concert program) and output (one record per piece per program) were necessarily different lengths, I couldn’t make that work. If you know how to do that, please drop me an email or tweet and I’ll be eternally grateful!
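One possible alternative, sketched below but untested against this dataset: build one small data frame per program with purrr’s map() and stack them once at the end with dplyr’s bind_rows(). This still processes programs one at a time rather than truly vectorizing, but it avoids growing a data frame with rbind() inside the loop, which is usually the slowest part; the list-valued soloist column may need extra handling.

# Untested sketch: map over programs, then row-bind once at the end
db <- map(1:13771, expand_program) %>%
  bind_rows()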

After a cup of coffee (or maybe two!), I have a handy tibble of almost 82,000 performance records from the entire history of the NY Philharmonic!

Most common composers and works

With this tidy tibble, we can really easily find and visualize basic descriptive statistics about the dataset. For example, what composers have the most works in the corpus? Here are all the composers with 400 or more works performed, in order of frequency.

This is produced by running the following code.

tidy_nyp %>%  
  filter(!composer %in% c('NULL', 'Traditional,', 'Anthem,')) %>%  
  count(composer, sort=TRUE) %>%  
  filter(n > 400) %>%  
  mutate(composer = reorder(composer, n)) %>%  
  ggplot(aes(composer, n, fill = composer)) +  
  geom_bar(stat = 'identity') +  
  xlab('Composer') +  
  ylab('Number of works performed') +  
  theme(legend.position="none") +  
  coord_flip()

I was surprised to see Wagner on top, even ahead of Beethoven. Tchaikovsky was also a big surprise to me. He’s popular, but I’ve ushered or attended over 200 performances of the Chicago Symphony Orchestra, and Beethoven and Mozart are definitely performed more frequently than Wagner and Tchaikovsky by the CSO today. So is this a NYP/CSO difference? Many of my music theory & history friends on Twitter were also surprised to see this ordering, so maybe not. In that case, have things changed over time?

Before looking at trends over time, let’s see if looking at specific works can shed any light. Here are the most performed works (and the code to produce the visualization), correcting for multiple movements listed from the same piece on the same program.

tidy_nyp %>%  
  filter(!title %in% c('NULL')) %>%  
  mutate(composer_work = paste(composer, '-', title)) %>%  
  group_by(composer_work, programID) %>%  
  summarize(times_on_program = n()) %>%  
  count(composer_work, sort=TRUE) %>%  
  filter(n > 220) %>%  
  mutate(composer_work = reorder(composer_work, n)) %>%  
  ggplot(aes(composer_work, n, fill = composer_work)) +  
  geom_bar(stat = 'identity') +  
  xlab('Composer and work') +  
  ylab('Number of times performed') +  
  theme(legend.position="none") +  
  coord_flip()

There are a lot of Wagner operas at the top! (Though it’s worth noting that only a few instances of each are full performances. Instead, most are just the overture or prelude, a common way of opening a symphony concert.) While many of Wagner’s most performed works are very short (10-minute overtures compared to 30-to-60-minute Beethoven and Tchaikovsky symphonies), and thus Beethoven probably occupies more time on the program than Wagner, the high number of Wagner, and even Tchaikovsky, pieces on NY Phil programs is still surprising to me.

Changes over time

Let’s see how things have changed over time. We can start simply by comparing their early history to their late history. Here are composer counts from 1842 to 1929 and 1930 to 2016 (roughly equal timespans, though not equal numbers of pieces).

Pre-1930:

And post-1929:

To do this, I simply added another filter to tidy_nyp:

filter(as.integer(substr(as.character(date),1,4)) < 1930) %>%
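Presumably (the post shows only the first of the two filters), the post-1929 chart uses the complementary condition:

filter(as.integer(substr(as.character(date),1,4)) >= 1930) %>%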

Here we see Beethoven, Tchaikovsky, and Mozart all ahead of Wagner in more recent history, with Wagner dominating (and Mozart missing from) the earlier history.

But we can model this with more nuance. Let’s make a new tibble that contains just the information we need on composer frequency year-by-year.

comp_counts <- tidy_nyp %>%  
  filter(!composer %in% c('NULL', 'Traditional,', 'Anthem,')) %>%  
  mutate(year = as.integer(substr(as.character(date),1,4))) %>%  
  group_by(year) %>%  
  mutate(year_total = n()) %>%  
  group_by(composer, year) %>%  
  mutate(comp_total_by_year = n()) %>%  
  ungroup() %>%  
  group_by(composer, year, comp_total_by_year, year_total) %>%  
  summarize() %>%  
  mutate(share = comp_total_by_year/year_total) %>%  
  group_by(year) %>%  
  mutate(average_share = mean(share))

This produces a tibble that contains a record for each composer-year combination, with fields for:
- composer name
- year
- number of pieces by that composer in that year
- total number of pieces for the year
- composer’s share of pieces for the year
- average composer share for the year (total / number of composers)

With this information, we can then plot the changing frequency of each composer. Here are the top four on a single plot.
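The code for this plot isn’t included in the post; a minimal sketch of how it could be produced from comp_counts is below. The top_composers vector is my own helper name, and it derives the four most-programmed composers rather than hard-coding their name strings, since the exact formatting of composer names in the dataset may vary.

# Identify the four most-programmed composers in the corpus
top_counts <- tidy_nyp %>%
  filter(!composer %in% c('NULL', 'Traditional,', 'Anthem,')) %>%
  count(composer, sort = TRUE) %>%
  head(4)
top_composers <- top_counts$composer

# Plot each composer's share of pieces performed, year by year
comp_counts %>%
  filter(composer %in% top_composers) %>%
  ggplot(aes(year, share, colour = composer)) +
  geom_line() +
  xlab('Year') +
  ylab('Share of pieces performed that year')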

We can very clearly see the change in these composers’ frequency of occurrence on the NY Phil’s program over time, with Wagner’s decline very pronounced, and Mozart’s rise (in the twentieth century) clearly evident as well.

However, comparing a composer’s share of the programming year by year isn’t always apples-to-apples. Early on in the Philharmonic’s history, seasons contained far fewer pieces, and thus far fewer composers, than recent years. This has the potential to provide artificially high numbers for composers in sparser years, as seen in the following visualization (and accompanying code).

comp_counts %>%  
  group_by(year) %>%  
  summarize(comp_per_year = n()) %>%  
  ggplot(aes(year, comp_per_year)) +  
  geom_line() +  
  xlab('Year') +  
  ylab('Composers appearing on a program')

To account for this, we can normalize a composer’s share of the repertoire in a given year by dividing it by the average repertoire share for composers in the year. So here is the changing normalized frequency for each of the top four composers on a year-by-year basis.
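Again the plotting code isn’t shown in the post; one way to produce the normalized version (reusing the top_composers helper assumed in the earlier sketch, with normalized_share as my own column name) might be:

# Normalize each composer's share by the average share for that year
comp_counts %>%
  filter(composer %in% top_composers) %>%
  mutate(normalized_share = share / average_share) %>%
  ggplot(aes(year, normalized_share, colour = composer)) +
  geom_line() +
  xlab('Year') +
  ylab('Normalized share (share / average share)')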

The same trends can be seen here ― Mozart’s gentle rise and Wagner’s drastic decline ― perhaps even more starkly. In particular, Wagner’s decline from a peak in 1921 to a trough in the 1960s stands out quite strikingly. The decline is the most precipitous in the late 1940s and early 1950s.

And now an explanation begins to emerge.

A number of musicians began to boycott or avoid performing the music of Richard Wagner in the late 1930s, as recounted by conductor Daniel Barenboim. Wagner was known as “Hitler’s favorite composer,” and his music was used prominently in the Reich. The Israel Philharmonic stopped performing his music in 1938, Arturo Toscanini (who occupies a not insignificant share of this dataset as a conductor) stopped performing at Wagner festivals in Bayreuth, etc. Looking at the NY Philharmonic data, it seems like this may be a broader trend.

In addition to Wagner’s decline between WWI and the early Cold War, we can see another significant wartime change, this time an increase. From 1939 to 1946, Tchaikovsky’s share of the NY Philharmonic’s repertoire rose precipitously to his highest (normalized) share in the entire corpus. Could this be due to Russia’s role in the Grand Alliance? I don’t know. I do know that during World War II, then-living Russian composer Dmitri Shostakovich was widely performed in the US as part of a pro-Russia, anti-Nazi wartime propaganda effort (see below). Could Tchaikovsky have been part of that? I don’t know the history of it. But I wouldn’t be surprised. I also wouldn’t be surprised if Tchaikovsky simply filled the role of popular, grand, Romantic composer … who wasn’t German. (Any Tchaikovsky scholars have a perspective to add?)

Conclusion

This is just a start, but I think these are interesting findings. As a music student and scholar, I never studied performance trends like this. My studies were mostly focused on musical structures and the evolution of compositional styles. But it’s cool to take a different kind of empirical look at musical evolution.

If this code helps you find other insights in the corpus, please drop me a line. I’m sure there’s much more to be mined out of this fascinating corpus.

And thanks to the archivists of the New York Philharmonic for putting this together! Hopefully more major orchestras will release their programming history publicly, so we can start mapping larger trends and make comparisons between them.

Banner image by Tim Hynes.

Walter B. Rudin: "Set Theory: An Offspring of Analysis"

Bookmarked Set Theory: An Offspring of Analysis (YouTube)
Prof. Walter B. Rudin presents the lecture, "Set Theory: An Offspring of Analysis." Prof. Jay Beder introduces Prof. Dattatraya J. Patil who introduces Prof....

MyScript MathPad for LaTeX and Livescribe

Bookmarked MyScript MathPad for LaTeX (myscript.com)
MyScript MathPad is a mathematic expression demonstration that lets you handwrite your equations or mathematical expressions on your screen and have them rendered into their digital equivalent for easy sharing. Render complex mathematical expressions easily using your handwriting with no constraints. The result can be shared as an image or as a LaTeX* or MathML* string for integration in your documents.
This looks like something I could integrate into my workflow.

A WordPress plugin for posting to IndieNews

Bookmarked WordPress IndieNews by Matthias Pfefferle (github.com)
Automatically send webmentions to IndieNews
I just noticed that Matthias Pfefferle has kindly created a little WordPress plugin for posting to IndieNews.

Jetpack 4.5: Monetize your site, brand new VideoPress, and many new shortcodes and widgets | Jetpack for WordPress

Read Jetpack 4.5: Monetize your site, brand new VideoPress, and many new shortcodes and widgets by Richard Muscat (Jetpack for WordPress)
New Jetpack release including site monetization tools, ad-free video hosting, new shortcodes and sidebar widgets.
How 1995! Jetpack 4.5 now has a widget for “blog stats: a simple stat counter to display page views on the front end of your site.”


Welcome to Jetpack 4.5, available now for upgrade or installation. We’re starting the year in style with some very exciting additions and improvements that we can’t wait for you to try. This release includes:

  • Jetpack Ads (WordAds)
  • Brand new VideoPress
  • New shortcode support
  • More sidebar widgets
  • An update to our Terms of Service

Continue reading Jetpack 4.5: Monetize your site, brand new VideoPress, and many new shortcodes and widgets | Jetpack for WordPress

Obama’s Secret to Surviving the White House Years: Books | The New York Times

Read Obama’s Secret to Surviving the White House Years: Books (nytimes.com)
In an interview seven days before leaving office, Mr. Obama talked about the role books have played during his presidency and throughout his life.

Not since Lincoln has there been a president as fundamentally shaped — in his life, convictions and outlook on the world — by reading and writing as Barack Obama.

Continue reading Obama’s Secret to Surviving the White House Years: Books | The New York Times

Obama ending special immigration status for migrants fleeing Cuba | The Washington Post

Read Obama ending special immigration status for migrants fleeing Cuba by Karen DeYoung (Washington Post)

The new policy eliminates a special parole period that allows them entry to wait for U.S. residence and ends the “wet-foot, dry-foot” policy.

The Obama administration, in one of its final foreign policy initiatives, on Thursday ended the special status accorded migrants fleeing Cuba who, upon reaching this country, were automatically allowed to stay.

Cubans are still covered by the 1966 Cuban Adjustment Act, which grants them permanent residency — a green card — after they have been here for one year. Until now, they were given temporary “parole” status while waiting for that year to pass. That will no longer be granted, making the act moot for most by denying them entry on arrival.

In this March 22, 2016 photo, President Barack Obama speaks at the Grand Theater of Havana, Cuba. (Desmond Boylan/AP)

Effective immediately, President Obama said in a statement, “Cuban nationals who attempt to enter the United States illegally . . . will be subject to removal,” treating them “the same way we treat migrants from other countries.”

More than a million Cubans have come to this country, many of them in vast exoduses by sea, since the island’s 1959 revolution. More than 250,000 have been granted residency under the Obama administration under the law, which can only be repealed by Congress.

The new rule on parole applies to Cubans attempting to enter the United States without visas by sea or by land through Mexico or Canada.

It ends the “wet-foot, dry-foot” policy, adopted by the Clinton administration in 1996 at a time when illegal seaborne migrants were flooding across the Florida Straits. That policy differentiated between those reaching U.S. soil — who were allowed to stay — and those intercepted at sea by the U.S. Coast Guard, who were returned to Cuba or sent to third countries.

The policy was agreed upon with the Cuban government, which issued a statement calling it “an important step in the advance of bilateral relations” that will guarantee “regular, safe and orderly migration.” The government has long complained about the special status for Cubans, particularly the “wet-foot, dry foot” policy, which it said encouraged illegal travel in unseaworthy vessels, homemade rafts and inner tubes.

As part of the accord announced in both capitals, Cuba will allow any citizen who has been out of the country for up to four years to return. Previously, anyone who had been gone for more than two years was legally said to have “emigrated.” The Cuban statement said efforts to “modernize” immigration policies would continue.

The White House described the changes as a logical extension of the normalization of relations with Cuba that began in December 2014, when Obama and Cuban President Raúl Castro announced they would end more than a half-century of estrangement. Since then, U.S.-Cuba diplomatic relations have been ­reestablished, and Obama has used his regulatory authorities to ease long-standing restrictions on commerce and trade, as well as travel by U.S. citizens to the island, under the continuing U.S. embargo.

The latest change comes as President-elect Donald Trump has indicated his unhappiness with increased Cuba ties and has threatened to reverse normalization. “If Cuba is unwilling to make a better deal for the Cuban people, the Cuban/American people and the U.S. as a whole, I will terminate deal,” Trump tweeted in late November, after the death of Cuban revolutionary leader Fidel Castro, the current president’s brother.

If he chose to do so after taking office, Trump could order the Department of Homeland Security to reinstitute special treatment for Cuban migrants.

Lawmakers long opposed to the new relationship with Cuba expressed displeasure at the new policy. “Today’s announcement will only serve to tighten the noose the Castro regime continues to have around the neck of its own people,” Sen. Robert Menendez (D-N.J.) said in a statement.

“Congress was not consulted prior to this abrupt policy announcement with just nine days left in the administration,” Menendez said. “The Obama administration seeks to pursue engagement with the Castro regime at the cost of ignoring the present state of torture and oppression, and its systematic curtailment of freedom.”

Benjamin Rhodes, Obama’s deputy national security adviser, said that plans for the change were kept quiet in large part to avoid a new flood of Cubans trying to enter — many of them trying to beat a deadline they feared was the inevitable next step in U.S.-Cuba rapprochement under the current administration.

The total number of Cubans admitted after reaching here without visas by land or sea was 4,890 in 2013, according to Customs and Border Protection. In 2016, the number was 53,416.

According to the Coast Guard, 1,885 people traveling by sea have either arrived here or been intercepted — and sent back — in fiscal 2017, which began Oct. 1.

Thousands of others have joined a growing stream of Central Americans who have made the arduous journey through Mexico, often after paying hefty sums to smugglers, to reach the U.S. border. While Cubans have been allowed to cross, others, largely from Guatemala and El Salvador, have been turned back.

“The aim here is to treat Cuban migrants in a manner consistent to migrants who come here from other countries . . . equalizing our immigration policies . . . as part of the overall normalization process with Cuba,” said Homeland Security Secretary Jeh Johnson. “Our approach to Cubans arriving [at the border] tomorrow will be the same as those arriving from other countries.”

Rhodes said the change was also justified because, while many Cubans in the past left the island “for political purposes . . . I think increasingly over time the balance has shifted to those leaving for more traditional reasons,” such as “economic opportunity.”

“That is not to say there are not still people who have political cause to leave Cuba,” he said. As with other countries, Rhodes said, “political asylum continues to be an option.” Adjudication of asylum claims of political or other persecution normally takes several years, allowing time to be granted a green card under the Cuban Adjustment Act before there is even a ruling on the claim.

The Cuban government continues to arrest dissidents and restrict civil liberties, including political and press freedoms. At the same time, however, it has slowly loosened its grip on the economy — allowing the growth of a private sector — and liberalized some other restrictions.

Sen. Patrick J. Leahy (D-Vt.), who has long advocated rapprochement with Cuba, said in a statement that “this is a welcome step in reforming an illogical and discriminatory policy that contrasted starkly with the treatment of deserving refugees from other countries. Refugees from all countries should be treated the same way, and now they will be. That’s the American way.”

Engage Cuba, a coalition of private U.S. companies and organizations working to end the trade embargo still in place against Cuba, called it “a logical, responsible, and important step towards further normalizing relations with Cuba.”

The new agreement also ends the Cuban Medical Professional Parole Program, adopted under the George W. Bush administration, which targeted Cuba’s policy of sending medical professionals abroad as a form of humanitarian aid by encouraging them to defect. The program allowed U.S. embassies abroad to accept them for U.S. migration.

A U.S. lottery that gives green cards to 20,000 Cubans on the island each year remains in place, Rhodes said.

Source: Obama ending special immigration status for migrants fleeing Cuba – The Washington Post

🔖 Green’s Dictionary of Slang

Bookmarked Green’s Dictionary of Slang (greensdictofslang.com)
h/t The Largest Historical Dictionary of English Slang Now Free Online: Covers 500 Years of the “Vulgar Tongue” | Open Culture.


“The three volumes of Green’s Dictionary of Slang demonstrate the sheer scope of a lifetime of research by Jonathon Green, the leading slang lexicographer of our time. A remarkable collection of this often reviled but endlessly fascinating area of the English language, it covers slang from the past five centuries right up to the present day, from all the different English-speaking countries and regions. Totaling 10.3 million words and over 53,000 entries, the collection provides the definitions of 100,000 words and over 413,000 citations. Every word and phrase is authenticated by genuine and fully-referenced citations of its use, giving the work a level of authority and scholarship unmatched by any other publication in this field.”

If you head over to Amazon.com, that’s how you will find Green’s Dictionary of Slang pitched to consumers. The dictionary is an attractive three-volume, hard-bound set. But it comes at a price. $264 for a used edition. $600 for a new one.

Now comes the good news. In October, Green’s Dictionary of Slang became available as a free website, giving you access to an even more updated version of the dictionary. Collectively, the website lets you trace the development of slang over the past 500 years. And, as Mental Floss notes, the site “allows lookups of word definitions and etymologies for free, and, for a well-worth-it subscription fee, it offers citations and more extensive search options.” If you’ve ever wondered about the meaning of words like kidlywink, gollier, and linthead, you now know where to begin.