September 4, 2021

Dear Google: This is not duplicative. It is the newest CSS there is. In an instructive post using older posts, I demonstrate to W3C, W4C. To CSS3, CSS4. To Sass, Sassy. Crawl away. Thanks. mrjyn



911 PORSCHE TURBO 3.3-liter 1978 Power corrupts absolutely, but fun absolves a dangerous automobile--mrjyn

    Back in 2015, software engineer Jacky Alciné pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem. But, as a new report from Wired shows, nearly three years on and Google hasn’t really fixed anything. The company has simply blocked its image recognition algorithms from identifying gorillas altogether — preferring, presumably, to limit the service rather than risk another miscategorization.
    Wired says it performed a number of tests on Google Photos’ algorithm, uploading tens of thousands of pictures of various primates to the service. Baboons, gibbons, and marmosets were all correctly identified, but gorillas and chimpanzees were not. The publication also found that Google had restricted its AI recognition in other racial categories. Searching for “black man” or “black woman,” for example, only returned pictures of people in black and white, sorted by gender but not race.

        Google Photos, y'all fucked up. My friend's not a gorilla. pic.twitter.com/SMkMCsNVX4
        — Jacky Alciné (@jackyalcine) June 29, 2015 

    A spokesperson for Google confirmed to Wired that the image categories “gorilla,” “chimp,” “chimpanzee,” and “monkey” remained blocked on Google Photos after Alciné’s tweet in 2015. “Image labeling technology is still early and unfortunately it’s nowhere near perfect,” said the rep. The categories are still available on other Google services, though, including the Cloud Vision API it sells to other companies and Google Assistant.
    It may seem strange that Google, a company that’s generally seen as the forerunner in commercial AI, was not able to come up with a more complete solution to this error. But it’s a good reminder of how difficult it can be to train AI software to be consistent and robust. Especially (as one might suppose happened in the case of the Google Photos mistake) when that software is not trained and tested by a diverse group of people.
    It’s not clear in this case whether the Google Photos algorithm remains restricted in this way because Google couldn’t fix the problem, didn’t want to dedicate the resources to do so, or is simply showing an overabundance of caution. But it’s clear that incidents like this, which reveal the often insular Silicon Valley culture that has tasked itself with building world-spanning algorithms, need more than quick fixes. 

    The Existing Literature on Medical Futility
    In 1990, Schneiderman, Jecker and Jonsen proposed a notion of medical futility based upon quantitative evaluations of the efficacy (or, more precisely, failure) of various end-of-life medical treatments:
  



“Pain that is excruciating, constant, incurable, and of such severity that it dominates virtually every conscious moment, produces mental and physical debilitation, and may produce a desire to commit suicide for the sole purpose of stopping the pain.”

My Own Definition

Following the examples of California and Texas, many states have adopted laws and regulations using the term intractable pain (IP). About a decade ago I personally expanded the traditional definition of IP for my own patients and began to educate others that IP patients are the most severe and needy of pain patients.

I am truly shocked by the number of physicians and other practitioners who prescribe opioids but aren’t aware of their states’ definitions, regulations, and legislation concerning IP. Very few opioid prescribers are aware that IP is defined in federal controlled substance regulations. I’m further shocked and dismayed that very few continuing education courses, conferences, and guidelines written by professional associations even mention the word intractable. Put another way, the most basic question of pain management is whether the patient’s pain is intractable and incurable, and whether it does or does not respond to standard therapies and dosages.


Alphabet Soup of Definitions

The alphabet soup of pain definitions, names, and descriptions is mind-boggling and has overlooked the basic purpose and concept of IP laws and regulations. In my readings this past week I came across these names in the medical literature as applied to pain and its descriptions: persistent, acute, chronic, breakthrough, neuropathic, incident, spontaneous, nociceptive, central, referred, centralized, radiculopathy, allodynia, hyperalgesia, hyperpathia, dysaesthesia, myofascial, visceral, and lancinating.

All these clinical names are fine, but none of them clearly indicates whether the patient’s pain is or is not curable. Recent controversies abound over the use of opioids in the treatment of chronic non-cancer pain, as evidenced by the promulgation of treatment guidelines, restrictions on supplies and dosages, and the current epidemic of abuse, diversion, and overdose.

Lost in the multitude of writings and debates involving these issues, however, is the simple question: “Is the patient’s pain curable or incurable?” One of the first jobs of a pain practitioner is to determine and record this fact in the chart.

In the past 20 years I’ve had the displeasure of reviewing an abundance of patient charts compiled by physicians who have regulatory, legal, or malpractice problems. The basic failing is almost always that nowhere in the chart is there a declaration of intractable or incurable pain; the physician has simply attempted to prescribe treatment on purely symptomatic grounds.

The original and basic concept of declaring a patient’s pain intractable is to allow the patient and physician to try non-standard treatments, including high doses of opioids if warranted. Implicit in all the states’ intractable pain laws and federal regulations is that the physician must document intractable and incurable pain in the record and show that the patient has tried and failed standard therapies and dosages. Today we’ve got plenty of agents to try before resorting to opioids and invasive interventions to treat pain, but the concept of a Patient’s Bill of Rights continues.

My message is straightforward. After you have described (or identified) the cause of pain (neuropathic, nociceptive, centralized, etc.), make a determination as to whether the patient does or doesn’t have intractable (incurable) pain.

Document this fact in the patient’s chart in clear language that even a 5th grader can interpret. Every prescription, report, and prior authorization should have IP noted on it, if applicable, to educate all concerned parties that the patient being treated is special and unique. Intractability and curability matter far more to patients, families, and regulators than knowing whether hyperalgesia or neuropathy is present.

    "... bawdy and uncommonly inappropriate writing, thus hilarious-micro-chops of synced, pious, redacted craprolls.
    His Elvis, if you will, is huffed on, rubbed between legs, and insufflated like a meme kitten, or persistent, acute, chronic, breakthrough, neuropathic, spontaneous, nociceptive, referred, centralized, radiculopathy, allodynia, hyperalgesia, hyperpathia, synesthesia, myocardial, visceral ratiocinating coke rails."@nytimesarts

https://t.co/lAsSjwQIMy

— mrjyn (@mrjyn) May 22, 2021

The late-day canvas the stars call evening is not a watchmaker. When Porsche announced its first turbocharged production model in 1974 – the 911 Turbo, known as the 930 – it was an occasion of shock and awe.

The first turbocharged production model in 1974 – the iconic Porsche Turbo – is one of those cars whose history lives on to tell tales forever.

Infinity is a massive topic.

Most of us have some conception of things that have no certain boundary, no limit, no end.

The rigorous study of infinity began in mathematics and philosophy, but the engagement with infinity traverses the history of cosmology, astronomy, physics, and theology.

In the natural and social sciences, the infinite typically appears as a consequence of our theories themselves.

A front spoiler, wide flared fenders, a rear wing, racing-grade wheels and suspension, and a turbocharged engine made the car a revolution.

Drivers drove the Turbo as long as tea break Meretricious Americas explicit Turbo classic cocktail livery joint expertise is visceral.

Owner Grant Barnes notes that by 2016 standards, 260 hp and a top speed of 155 mph don’t sound so impressive.

But this was the ’70s, an era when pollution controls were squeezing the output of engines everywhere.

Today turbochargers are the go-to answer for boosting output in the face of fuel laws, and soon almost every Porsche sports automobile will be fitted with one.

Turbochargers arrived as early as the 1960s, but the 911 Turbo’s flat-six was a descendant of the early-’70s 917/10 and 917/30 Can-Am racers, whose turbocharged flat-12s gave the 917/30 a rating of 1,100 hp in 1973.

The 911 Turbo wore a wide body, wide wheels, and a “whale tail” spoiler.

    “Most people my age had a poster of that car in their rooms when they were kids – that and Farrah Fawcett,” said Andy Guzman, a 10-year member of the Metro New York chapter of the Porsche Club of America.
    Guzman was in grade school when the car arrived on the market. “Back then the turbo was really cool and cutting edge. Now it’s just become normal.”
    A larger 3.3-liter engine arrived in 1978, bringing the 930’s power rating up to 300 horsepower.
    “When the turbo kicked in, it would snap your neck back,” said Guzman, who owned an ’86 930. “Power wasn’t very linear; it was like there was an on-off switch. But that was the fun of it; it was a dangerous car.”
    Porsche introduced a slightly more powerful turbo 911 – the 964 – in 1991. It was the last rear-wheel-drive turbo 911 and boasted an improved suspension and better handling. The last air-cooled 911 turbo was the 993, which was offered from 1995 until the end of the millennium.
    Wider-bodied, water-cooled Porsche 911s hit the market in 2000 – much to the chagrin of air-cooled purists, Guzman said – in the form of the 996. They were followed in 2006 by the 997 and in 2010 by the 997.2, which featured a redesigned 500-horsepower 3.8-liter engine and an optional seven-speed double-clutch automatic – gasp! – gearbox.
    Now Porsche offers turbocharged engines on most of its lineup. Why? The reason is simple. As Porsche executives have pointed out, turbocharging helps the company build smaller, more efficient engines that can still dish out heaps of power when called for. A glance at the automaker’s product catalog shows that turbocharging makes possible a 580-horsepower car – the 911 Turbo S – that will go from zero to 60 mph in less than 3 seconds on its way to a top speed of 205 mph. All that with the possibility of 24-mpg highway fuel consumption, dependent upon right-foot restraint of course.
Porsche’s engineering chief told Top Gear last year that the company’s race-bred hybrid brainpower would show up in its production cars in the future, once again tracing the route from racetrack to road cars. “People are afraid of change,” Guzman observed, a statement that was true for Porsche owners in the switch from air cooling to water cooling. “But once they see what it can do, they get used to it.”



A characterization of the data associated with each trend along a number of key characteristics, including social network features, time signatures, and textual features.

This improved understanding of emerging trends, on Twitter in particular and in social awareness streams (SAS) in general, will allow researchers to design and create new tools to enhance this stream of information, including filtering, search, and real-time access to trend information as it pertains to local geographic communities.

    To this end we begin with an introduction to Twitter and a review of related efforts and background to this work. We then formally describe our dataset of Twitter trends and their associated messages. Later we describe a qualitative study exposing the types of trends found on Twitter. 

    Finally, in the bulk of this article, we identify and analyze emerging trends using the unique social, temporal, and textual features of Twitter, an SAS service with tens of millions of registered users as of June 2010. A user’s messages are displayed as a “stream” on the user’s Twitter page.

    In terms of social connectivity, Twitter allows a user to follow any number of other users. The Twitter contact network is directed: user A can follow user B without requiring approval or a reciprocal connection from user B. 

    Users can set privacy preferences so that their updates are available only to each user’s followers.

        By default the posted messages are available to anyone. In this work we only consider messages posted publicly on Twitter. Users consume messages mostly by viewing a core page showing a stream of the latest messages from people they follow, listed in reverse chronological order.

    The conversational aspects of Twitter play a role in our analysis of Twitter temporal trends. Twitter allows several ways for users to directly converse and interact by referencing each other in messages using the @ symbol. A retweet is a message from one user that is “forwarded” by a second user to that second user’s followers, commonly using the “RT @username” text as a prefix to the original (or previous) message (e.g. “RT @username Tomorrow morning watch how”). 

    A reply is a message from one user in response to another user’s message, identified by prefixing it with the replied-to user’s username (e.g. “@username check out ur new twitter trends”). 

    Finally, a mention is the inclusion of some other user’s username in the text of the message (e.g. a message that includes “@username” anywhere in its text). 

    Twitter also allows users to search recent messages in real time. 
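    The retweet, reply, and mention conventions just described are regular enough to detect mechanically. The sketch below is not from the article; it is a minimal, assumed illustration in Python of how public messages might be bucketed into those interaction types, and the pattern names and the classify_message helper are hypothetical choices of mine.

        import re

        # Assumed conventions: "RT @user ..." marks a retweet, a leading "@user"
        # marks a reply, and "@user" anywhere else marks a mention.
        RETWEET = re.compile(r"^RT\s+@(\w+)", re.IGNORECASE)
        REPLY = re.compile(r"^@(\w+)")
        MENTION = re.compile(r"@(\w+)")

        def classify_message(text: str) -> str:
            """Label a public message by its conversational type."""
            if RETWEET.match(text):
                return "retweet"
            if REPLY.match(text):
                return "reply"
            if MENTION.search(text):
                return "mention"
            return "plain"

        for msg in ["RT @username Tomorrow morning watch how",
                    "@username check out ur new twitter trends",
                    "hanging out with @username",
                    "just an ordinary update"]:
            print(classify_message(msg), "<-", msg)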


    Now imagine, as happened to Google, an algorithm that fit “gorilla” to the image of a black person.

    That’s wrong, but it’s categorically differently wrong from simply fitting “airplane” to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally you would want them never to happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did — their “workaround” was to stop classifying “gorilla” entirely, because the loss incurred by misidentifying a person as a gorilla was so large.  
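    To make the penalty idea concrete, here is a small, assumed Python sketch (the label names, the BLOCKED set, and the penalty value are illustrative inventions, not Google's actual system). It shows a cross-entropy loss that adds an enormous cost for any probability a model assigns to an offensive label for a photo of a person, alongside the blunter workaround of never emitting that label at prediction time.

        import numpy as np

        # Hypothetical label set; BLOCKED mimics the "stop classifying gorilla
        # entirely" workaround described above.
        LABELS = ["person", "airplane", "gorilla", "cat"]
        BLOCKED = {"gorilla"}

        def weighted_loss(probs, true_label, penalty=1e6):
            """Cross-entropy on the true label, plus a huge penalty for any
            probability mass placed on blocked labels when the subject is a person."""
            idx = {name: i for i, name in enumerate(LABELS)}
            loss = -np.log(probs[idx[true_label]] + 1e-12)
            if true_label == "person":
                loss += penalty * sum(probs[idx[b]] for b in BLOCKED)
            return loss

        def predict(probs):
            """The blunt workaround: never emit a blocked label at inference time."""
            allowed = [(p, name) for p, name in zip(probs, LABELS) if name not in BLOCKED]
            return max(allowed)[1]

        probs = np.array([0.40, 0.05, 0.50, 0.05])  # a model that misfires badly
        print(weighted_loss(probs, "person"))       # dominated by the penalty term
        print(predict(probs))                       # "person"; the blocked label can never appear

    Masking the label at prediction time, as in predict above, amounts to assigning it an effectively infinite loss, which is the trade described above: no “gorilla” labels at all rather than any risk of the offensive mistake.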


A patient goes to see a doctor. The doctor performs a test with 99 percent reliability--that is, 99 percent of people who are sick test positive and 99 percent of the healthy people test negative. The doctor knows that only 1 percent of the people in the country are sick. Now the question is: if the patient tests positive, what are the chances the patient is sick?

More generally, Bayes's theorem is used in any calculation in which a "marginal" probability is calculated (e.g. p(+), the probability of testing positive in the example) from likelihoods (e.g. p(+|s) and p(+|h), the probability of testing positive given being sick or healthy) and prior probabilities (p(s) and p(h)): p(+) = p(+|s)p(s) + p(+|h)p(h). Such a calculation is so general that almost every application of probability or statistics must invoke Bayes's theorem at some point. In that sense Bayes's theorem is at the heart of everything from genetics to Google, from health insurance to hedge funds. It is a central relationship for thinking concretely about uncertainty and--given quantitative data, which is sadly not always a given--for using mathematics as a tool for thinking clearly about the world.
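Plugging the doctor's numbers from the earlier paragraph into that formula answers the question posed there; the short Python sketch below (the variable names are mine) works it through.

    # Prior: 1 percent of people are sick.
    p_s, p_h = 0.01, 0.99
    # A 99-percent-reliable test: P(+|sick) and P(+|healthy).
    p_pos_s, p_pos_h = 0.99, 0.01

    # Marginal probability of a positive test, exactly as in the formula above.
    p_pos = p_pos_s * p_s + p_pos_h * p_h        # 0.0198
    # Bayes's theorem: P(sick|+) = P(+|sick) P(sick) / P(+).
    p_sick_given_pos = p_pos_s * p_s / p_pos     # 0.5

    print(p_pos, p_sick_given_pos)

Despite the test's 99 percent reliability, the 1 percent prior means a positive result leaves the patient with only a 50-50 chance of actually being sick.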

    The importance of accurate data in quantitative modeling is central to the subject raised in the question: using Bayes's theorem to calculate the probability of the existence of God. Scientific discussion of religion is a popular topic at present, with three new books arguing against theism, among them University of Oxford professor Richard Dawkins's The God Delusion, which argues specifically against the use of Bayes's theorem for assigning a probability to God's existence. (A Google news search for "Dawkins" turns up 1890 news items at the time of this writing.) Arguments employing Bayes's theorem calculate the probability of God given our experiences in the world (the existence of evil, religious experiences, etc.) and assign numbers to the likelihood of these facts given the existence or nonexistence of God, as well as to the prior belief in God's existence--the probability we would assign to the existence of God if we had no data from our experiences. 

    Dawkins's argument is not with the veracity of Bayes's theorem itself, whose proof is direct and unassailable, but rather with the lack of data to put into this formula by those employing it to argue for the existence of God. The equation is perfectly accurate, but the numbers inserted are, to quote Dawkins, "not measured quantities but personal judgments turned into numbers for the sake of the exercise." 

    Note that although this is receiving much attention now, quantifying one's judgments for use in Bayesian calculations of the existence of God is not new. Richard Swinburne, for example, a philosopher of science turned philosopher of religion (and Dawkins's colleague at Oxford), estimated the probability of God's existence to be more than 50 percent in 1979 and in 2003 calculated the probability of the resurrection [presumably of both Jesus and his followers] to be "something like 97 percent." 

    (Swinburne assigns God a prior probability of 50 percent since there are only two choices: God exists or does not. Dawkins, on the other hand, believes "there's an infinite number of things that some people at one time or another have believed in ... God, Flying Spaghetti Monster, fairies or whatever," which would correspondingly lower each outcome's prior probability.) 

    In reviewing the history of Bayes's theorem and theology one might wonder what Reverend Bayes had to say about this and whether Bayes introduced his theorem as part of a similar argument for the existence of God. But the good reverend said nothing on the subject and his theorem was introduced posthumously as part of his solution to predicting the probability of an event given specific conditions. 

One primary scientific value of Bayes's theorem today is in comparing models to data and selecting the best model given those data. For example, imagine two mathematical models, A and B, from which one can calculate the likelihood of any data given the model (p(D|A) and p(D|B)). For example, model A might be one in which spacetime is 11-dimensional and model B one in which spacetime is 26-dimensional. 

Once one has performed a quantitative measurement and obtained some data D, one needs to calculate the relative probability of the two models: p(A|D)/p(B|D). 

    Note that, just as in relating p(+|s) to p(s|+), one can equate this relative probability to p(D|A)p(A)/p(D|B)p(B). To some this relationship is the source of deep joy; to others, maddening frustration. 
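    As a small, assumed illustration (the likelihood and prior numbers below are placeholders, not taken from any real model), the relative probability can be computed directly from those four quantities:

        def posterior_odds(lik_A, lik_B, prior_A, prior_B):
            """p(A|D) / p(B|D) = [p(D|A) p(A)] / [p(D|B) p(B)], by Bayes's theorem."""
            return (lik_A * prior_A) / (lik_B * prior_B)

        # Same made-up likelihoods, different priors: the priors decide the verdict.
        print(posterior_odds(0.002, 0.0005, prior_A=0.5, prior_B=0.5))  # 4.0: the data favor A
        print(posterior_odds(0.002, 0.0005, prior_A=0.1, prior_B=0.9))  # ~0.44: the priors flip it

    With equal priors the data alone decide; with strongly unequal priors the same data can point the other way, which is exactly the frustration described next.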

    The source of this frustration is the unknown priors, p(A) and p(B). What does it mean to have prior belief about the probability of a mathematical model? Answering this question opens up a bitter internecine can of worms between "the Bayesians" and "the frequentists," a mathematical gang war which is better not entered into here. To oversimplify, "Bayesian probability" is an interpretation of probability as the degree of belief in a hypothesis; "frequentist probability" is an interpretation of probability as the frequency of a particular outcome in a large number of experimental trials. In the case of our original doctor, estimating the prior can mean the difference between a more-than-likely and a less-than-likely prognosis. In the case of model selection, particularly when two disputants have strong prior beliefs that are diametrically opposed (belief versus non-belief), Bayes's theorem can lead to more conflict than clarity.