Tuesday, December 27, 2016



HOW MUCH IS ENOUGH? - If you go up to the average person and ask if they would like a million dollars, the unsurprising and almost universal response would be, "But of course!" If you proceed further, doubling the amount offered each time, you would still be hard pressed to find anybody who would ever say, "You know... no... that's enough. In all honesty, I really don't think I will ever need more than what I already have." But it occurs to me that people, in fulfilling their human predisposition to take too much, will nonetheless not leave enough for everybody else. Might this explain why the world is in such neverending chaos? It is clearly evident from our history- and our present- that it doesn't take a Rhodes scholar to understand that extreme disparity of wealth leads to endless conflict and terrorism. Logically, endemic strife is the only reaction left for those who find no place for themselves under law in the present, ever more exclusively profit-driven corporate economic system.

If you apply pure logic to the contradiction between ever increasing profit and the imperative of sustainability, it stands to reason that at some point rather early in this doubling-of-wealth exercise- presently offered only to the one-tenth of one percent of the population- you arrive at what any economist would call a diminishing return. That is, the point where the next dollar you are given, earn, or steal actually diminishes both your own physical and mental well-being and that of literally everyone else cohabiting this finite globe. So why do the vast majority of people- especially the obscenely rich- seek exclusive economic control and preclude others from ever benefiting along with them from greater and greater amounts of capital, amounts which objectively exceed anything that could ever measurably improve their lives? Arguably, there are two basic reasons, both related to human evolution, that if not fully addressed and evolved through in the near future will preclude humans from remaining a viable life form on this planet:

1. As a life form that relatively recently evolved from hunter-gatherers, we have not yet become accustomed or enlightened to the notion that, through phenomena like the relatively recent science-driven industrial revolution, we are no longer susceptible to the instability in our food supply or the diseases that literally made human survival questionable in the not so distant past.

2. The exclusive control of then-limited capital to determine who would eat and who would not, mediated through war and domination, might have been necessary in the past, but it has no place in a 21st century where, for the first time in history, we literally have the ability to fulfill all reasonable human needs. And yet we somehow continue to implement the obsolete mechanisms of the past like war, which now unnecessarily and ironically consumes the very capital that could finally make these endless wars an outmoded and no longer practical remnant of the past.

Unlike Karl Marx, I don't look to the proletariat to understand and redress a socio-economic system that continues to exploit all of us. I don't, because recent history has shown that the endemic poverty of the working class makes them all the more susceptible to the disease of greed, which is expressed through the amassing of wealth far beyond any amount an individual might find useful.
Or simply stated, the present corruption of the Left in both Europe and the United States- the most recent example of which is the Democratic Party backing business-as-usual Hillary instead of Bernie- has been on a par with that of the Right, with whom they have colluded. I think we actually might have a better chance of turning things around by addressing our concerns to the one-tenth of one percent at the top of the economic heap, who presently continue unabated to undermine what ultimately- in the not too distant future- will also be their own undoing. They at least have the education that is a prerequisite for understanding what is going on- an education no longer available in our purposefully undermined and privatized public schools.

The argument I would make to the rich- if ever given the opportunity- is that the amassing of wealth is and always has been a psychological hedge the rich employ against their ultimate fear of death. I call it the Pyramid Syndrome, where pharaohs of the past- and present- think they can avoid the grim reaper by building enough pyramids- or their modern-day equivalents- and filling them with obscene amounts of wealth from which they never derive any life-improving utility. What is the cost of owning and controlling assets that for all intents and purposes you not only cannot use, but which actually have the effect of destabilizing the majority of the remaining population in countries like Iraq, Afghanistan, and the rest of what Naomi Klein calls The Shock Doctrine world? Cannibalizing these countries' economies to the point where engaging in endless war and/or terrorism becomes the people's only logical, if regrettable, redress of grievance against the uber rich- they are given no other viable option.

Is it possible to redistribute the unused capital of the uber rich without in any way, shape, or form impacting rich people's real net worth? Think of an Airbnb or Uber model, where the capital investment of a house or car is shared in a way that maximizes the value of the underlying asset without in any way diminishing its real worth to the owner. If Donald Trump has a house that he occupies for two weeks in a year, what loss would he incur if the house were occupied for the remainder of the year by others, who not only got utility out of the house, but also paid the maintenance costs, to the point of a net freeing up of capital for other, more productive uses?

Have we forgotten that the genesis of what we historically refer to as human civilization came only after an interminable dark ages? Wasn't it supposed to be founded on the rule of law- a society where one could seek and attain redress of grievance under law without having to commit what we now call terrorism? Have we so soon forgotten that which at an earlier time kept humanity in such chaos and anarchy that social development- with its prerequisite of peace- could never be maintained long enough to allow a renaissance to occur? Have those running this society so soon forgotten that the real reason for the rule of law was not its moral underpinning expressed by religion, but rather the marked improvement it offered for the chances of human survival?

The Devil and G-d have been having a longstanding argument about the nature and future of humanity. The disagreement is over whether man, armed only with his wits and free will, can ultimately divine a possible purpose to his existence.
That is, whether or not some manifestation of immortality and heaven can be attained in this lifetime, or whether we are doomed to be born, live, and die without ever achieving peace with insight into ourselves and the others we share this planet with, who in the final analysis are only different aspects of ourselves. One way or the other- be it secular or religious- the end of these unsustainable days is on the horizon. We can argue about whose vision of the apocalypse is more accurate, or we can finally learn the lesson that might just allow us all to evolve into the next level of human consciousness, one that brings heaven to earth while the only thing getting destroyed is human greed. 26 Dec
DOES A FREE PRESS STILL EXIST FOR TRUMP TO ATTACK? - While I more than share the concerns of Professor Robert Reich and others who find an impending Donald Trump presidency to be a scary prospect, I do not find it productive for Reich to fabricate a fantasy ideal of a supposed free press in the United States that, if the truth were told, ceased to exist long before Trump came on the scene to attack it. On today's Democracy Now, ex-Secretary of Labor and now Berkeley Professor Robert Reich vilified president-elect Trump without ever acknowledging that this country doesn't have a free press. How could Trump threaten a free press that on its own gave him an estimated $1 billion in free publicity during the recent election, while openly and knowingly abrogating the responsibility to report the real news that Reich falsely assumes it fulfilled? Could it be that when it comes to the real nature of the press, Reich is as truth challenged as he accuses Trump of being?

Donald Trump is not turning the public against the media, as Reich asserts; rather, a corporate-dominated press that exclusively reports the corporate party line has already done this to itself. Endless wars waged for the sole purpose of corporate profit have never found it difficult to get the mainstream media's unquestioning support. This co-option of the 4th Estate took place long before Trump. But it did give Trump an opening to assert- without fear of contradiction by a lied-to public- that "the press are liars...they're terrible people." What would you call a press that goes along with wars in Iraq or Afghanistan justified by the known falsehood of "weapons of mass destruction," or that refuses to cover the multiplicity of domestic issues that never seem to see the light of day on an evening news that now covers little of relevance, while giving well over 50% of its broadcast time to commercial advertisement?

If the election results prove anything, it's that the public was already against the media, because it doesn't report the news. Rather, it tells us lies: that Bernie Sanders "is not a serious candidate"- much like Donald Trump- and that "all the polls show Hillary Clinton is going to win." Has the mainstream media ever acknowledged its lies? Are these lies any worse than Trump's? At least Trump doesn't believe he's telling the truth, and his supporters don't seem to give a damn. In this context, when Trump says, "The press are liars...they're terrible people," is Trump wrong? What is the difference between the "fact free universe" that Robert Reich accuses Trump of living in and the fact-free universe Reich's idealized media has now created for generations by failing to correct the known falsities that both Democrats and Republicans have engaged in without an independent 4th Estate keeping them honest?

If Robert Reich really believes- and I think he does- that "freedom of the press depends on the tacit norms and understandings of a free society," it is incumbent upon him and others in the Democratic Party and elsewhere to acknowledge that these norms were abrogated long before Trump was on the scene. And if Robert Reich, Bernie Sanders, or other good people want to see this situation turned around by Our Revolution, we need to clean our own house first as a prerequisite to going after The Donald and the nadir of consciousness he incarnates. 20 Dec
LAUSD'S FALSE TEACHER OVERPAYMENT CLAIMS CLEARLY VIOLATE PROVISIONS OF THE COLLECTIVE BARGAINING AGREEMENT - In its incessant war against high-seniority teachers, whose only crime is that they were at the top of the salary scale, the Los Angeles Unified School District (LAUSD) administration has been bringing bad faith claims of teacher overpayment because it knows that most teachers lack the financial wherewithal to contest these bogus claims, and because their union, United Teachers Los Angeles (UTLA), has made it abundantly clear that it will do nothing to aid these retired and fired teachers- even though the alleged overpayments are claimed to have occurred while these teachers were still UTLA dues-paying employees of LAUSD. The only document controlling how much a teacher should be paid is the Collective Bargaining Agreement (CBA), which LAUSD has ignored, making up out of whole cloth an hourly and daily rate of compensation that in no way reflects the CBA.

But even if there were some justiciable dispute regarding alleged teacher overpayment, it is NOT an exception to the requirement to exhaust all administrative remedies before going into Superior Court, as LAUSD has incorrectly and purposefully done, because:

(a) UTLA and LAUSD are parties to, and bound by the terms of, a collective bargaining agreement;

(b) teachers are UTLA members who are only bound by the terms of the CBA and not some interpretation arrived at solely by LAUSD;

(c) compensation is a topic specifically within the jurisdiction of the Public Employment Relations Board (PERB), not Superior Court, until all administrative law remedies are exhausted;

(d) this dispute involves interpreting the terms of the CBA, because the CBA states teachers are paid an annual "salary," not the "hourly" wage LAUSD has fabricated on its own in order to advance this bogus claim of teacher overpayment;

(e) even if LAUSD could arguably try to collect overpayments based on its "calculations," those calculations involve interpretation of the CBA- an additional reason why the issues must first be submitted to PERB and not Superior Court;

(f) the CBA has a 3-year limitation on recovery of overpayment, and a large number of LAUSD's bogus assertions of overpayment are well beyond this limitation; and

(g) in asserting claims of overpayment against teachers, the District NEVER produces any ADMISSIBLE EVIDENCE to show which days were worked, the number of hours worked, etc.

Think about this: the CBA imposes a MINIMUM of 6 on-site hours per day and explicitly states that teachers are expected to work more than 6 hours per day. Furthermore, although teachers are required to sign in, the purpose is to record attendance (and, of course, to build a discipline file). The CBA does not require teachers to keep a record of every minute they spend doing school work outside the classroom. It is also relevant that there is no language in the CBA suggesting or requiring teachers to account for every hour they work- that is the purpose of being paid an annual and not an hourly salary, which is clearly the arrangement under the CBA regarding teacher compensation. In discussing this matter with attorney Ron Lapekas (TeacherLawLA@yahoo.com, (213) 342-8560), who is representing a significant number of teachers from whom LAUSD is unjustifiably seeking restitution based on these false allegations, it emerges that LAUSD merely alleges it has calculated an "overpayment."
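To make the dueling arithmetic concrete, here is a minimal sketch in Python of the two calculations at issue: the District's apparent method of dividing the annual salary by an invented "contract hours" figure (attendance days multiplied by 8, as detailed below), and the same division once the uncounted off-site hours the CBA contemplates are added back in. Every number in it- salary, attendance days, off-site hours- is a hypothetical placeholder for illustration, not a figure from any actual LAUSD claim or from the CBA.

```python
# All figures below are hypothetical placeholders for illustration --
# none are taken from an actual LAUSD claim or from the CBA itself.
annual_salary = 75000.0      # assumed annual salary under the CBA
attendance_days = 182        # assumed paid attendance days per year

# The District's apparent method: invent "contract hours" by multiplying
# attendance days by 8, then derive an hourly rate from the annual salary.
contract_hours = attendance_days * 8
implied_rate = annual_salary / contract_hours
print(f"District's implied rate: ${implied_rate:.2f}/hr over {contract_hours} hrs")

# Flipping the argument: count the off-site work the CBA contemplates
# (grading, parent conferences, weekend prep). Two extra hours per
# attendance day is an assumption chosen only to show the effect.
offsite_hours = attendance_days * 2
total_hours = contract_hours + offsite_hours
actual_rate = annual_salary / total_hours
print(f"Rate counting off-site work: ${actual_rate:.2f}/hr over {total_hours} hrs")

# On these assumptions the teacher works 25% more hours than the District
# counts, so an "overpayment" computed from the invented hourly rate
# turns, if anything, into an underpayment.
```

On these made-up numbers the implied rate falls from about $51.51 to about $41.21 an hour the moment off-site work is counted- which is exactly why the CBA's annual-salary language, not a fabricated hourly rate, has to control.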
Again, the only "evidence" provided to attorney Ron Lapekas or to any of the teachers LAUSD is seeking money from is a bare allegation in the form of a printout, with no proof of the overpayment claims. This is so even after Lapekas asked for detailed evidence. The District appears to believe that it only needs to divide the annual salary payable under the CBA by a number of "contract hours" it has calculated on its own, outside CBA negotiations, by multiplying the number of attendance days by 8. However, as every teacher knows, they do not work an 8-hour-a-day, 5-day-a-week assignment. LAUSD intends and demands that teachers spend countless hours meeting parents after school, making phone calls, grading papers at home, and performing other LAUSD duties on weekends, holidays, etc. Unless these hours are also considered, it is mathematically impossible to calculate an exact number of work hours, because LAUSD not only fails to include even an estimate of these hours, it never asks teachers how many additional hours they spend working outside of formal school time.

Indeed, flipping LAUSD's argument upside down: if the District were to include all the hours in excess of 8 per day and all hours worked on days when attendance was neither required nor monitored, it is a statistical certainty that almost all teachers work more than 40 hours a week. If so, then the "overpayment" calculations would, in fact, show that all such teachers were significantly "underpaid." The situation would be much clearer if teachers were not "exempt" under the overtime laws. However, the fact that teachers can't sue for time and a half for all hours over 8 per day and 40 per week does not mean that these hours don't constitute unpaid overtime. Therefore, from an equity position, LAUSD- as the party asserting overpayment- has the burden to produce evidence proving that its calculations are correct under the CBA. Lapekas believes LAUSD must also produce evidence showing that the teacher NEVER worked more than 8 hours per day or more than 40 hours per week.

And again... and by the way... where is (WAS?) UTLA on this subject? Did it file a petition with PERB? Since these "overpayment" lawsuits have been filed against several hundred teachers based on compensation issues that arose DURING their employment by LAUSD, when UTLA was and remains the exclusive collective bargaining representative, UTLA's flimsy and clearly erroneous position has been that the teachers targeted by LAUSD are "former" UTLA members whom it is not required to defend. As UTLA will undoubtedly find out in the not so distant future, nothing could be further from the truth... or from its real legal obligation. Until that time, a still active member of UTLA might formally ask the UTLA administration what he or she has to do to be protected from this kind of bad faith lawsuit after retiring. The answer... or lack of same... should be enlightening and speak volumes. 18 Dec
THE REAL COST OF MARGINALIZING THE "ELDERLY" - The recent victory of Donald Trump and his now almost across-the-board appointment of ultra-conservatives to fill key positions in his administration should come as no surprise. Rather, it is just the latest expression and expansion of the now longstanding laissez-faire corporate theories touted by the late University of Chicago economist Milton Friedman. Author Naomi Klein, in her 2007 book The Shock Doctrine, shows in alarming detail how Friedman and his followers, with the active support of the United States government, have over the last half century- throughout Latin America (Chile, Argentina, Brazil, and Bolivia) and elsewhere around the world- created a multinational corporate oligarchy that pledges allegiance only to the country it can control. Simply stated: sovereignty and the well-being of the majority now take a back seat to ever increasing corporate profit at any cost.

What is now being sought with greater and greater rapidity is the phasing out of any independent government role in the performance or regulation of virtually any aspect of the American or world economies, in areas as diverse as public education or the waging of endless wars motivated almost exclusively by perceived future corporate profit. Or more simply stated: having the third largest oil reserves in the world had more to do with going to war in Iraq in 2003 than did weapons of mass destruction.

However, it has only dawned on me recently that there is something much worse than entities like multinational corporations that measure their well-being solely by whether they attain an ever increasing profit. If you think about it, such uncontrolled growth without reinvestment is actually much more akin to the definition of a cancer than to a viable social entity. What is worse than, for example, targeting your most senior workers for the sole reason that you think you can replace them at a fraction of the cost, while putting the "savings" into more corporate profit, is the failure to realize that with the loss of your most senior workforce goes the institutional memory that might have allowed you to know what happened the last time the economy was pushed over the edge by corporate greed. I think it was called the Great Depression. 16 Dec
IT'S IMPOSSIBLE TO REFORM THE DEMOCRATIC PARTY WITHOUT ADDRESSING THE "5TH COLUMN" - Watching the Our Revolution online event tonight in support of Keith Ellison, who is being put forward to head and implement long overdue reform in the Democratic Party, there remained the proverbial 800-pound gorilla on stage with Ellison in the person of AFT President Randi Weingarten. Weingarten's career heading AFT exemplifies the same contradiction between professed values and actual conduct that now plagues the Democratic Party, a contradiction that remains unaddressed even after the recent drubbing at the polls. Weingarten and AFT have systematically abandoned falsely targeted teachers and cooperated with their removal on false charges, acquiescing without objection in the premeditated dismemberment of public education, while tacitly supporting the corporate agenda of the exclusively profit-motivated 1% in its incessant move to privatize public education for profit and to further the dumbing of America.

Tonight at the AFT-sponsored event, no notice of this was taken by Sanders, Ellison, or any of the other people of good will who truly want to fix the Democratic Party but who seem inexplicably blind to the open and obvious corruption of a Randi Weingarten and other members of the traditional Democratic establishment leadership- corruption that has continued unabated while selling out the party's core values, the very values Sanders enumerated in his speech tonight. This has gone on for so long that too many of the Democrats' core constituencies- those who voted for Obama twice- have crossed over in disgust to support Trump and a Republican agenda that is in reality completely antithetical to their true interests. And none of these "reformers" wonders why, when the Weingarten answer is standing next to them?

Although Bernie Sanders eloquently laid out what the Democratic Party platform has been about in the past and should be about in the future- a platform the vast majority of Americans of either party support, and which the Republican Party completely and utterly opposes- none of those wanting to reform the Democratic Party seemed willing to address the blatant 5th column within it, represented at the event by the likes of Randi Weingarten, who, like much of the existing Democratic leadership, has always put personal self-interest ahead of either the rank and file of her union or the best interests of the average working person. This is why Republicans now continue to dominate all levels of both state and federal government: not because they are liked, but because of the blatant hypocrisy that still runs the Democratic Party, which habitually offers its voters only the lesser of two evils. As much as I like Sanders, he never seems to address or reconcile this contradiction, even though the personification of why Democrats lose and Republicans win was standing right behind him on the stage in the person of AFT President Randi Weingarten.

It is my contention that Democrats will continue to lose the vast majority of state and federal elections as long as those like Randi Weingarten remain in the Democratic Party leadership and in the pocket of the 1% they really serve, in the move to privatize public education and any other area of our economy where the profit of the 1% takes precedence over the will and needs of the majority of people in this country. 14 Dec
THE 10 IDEAS TO FIX PUBLIC EDUCATION TRUMP APPOINTEE BETSY DEVOS IS LEAST LIKELY TO PROPOSE - After rereading Naomi Klein's The Shock Doctrine in the context of Trump's recent election as president and his incessant ultra-conservative appointments, it occurred to me that the social and economic subversion and conspiracy she so aptly describes in Chile, Argentina, Brazil, and elsewhere- carried out under the guise of free market, laissez-faire, Milton Friedman Chicago School economics- is something Americans are now going to experience first hand. What follows is a more rational and far less expensive alternative vision of what could be done to fix public education, short of bringing American imperialism home to America while privatizing the root institution necessary for a true democracy of, by, and for the people.

1. The failure of discipline is a major factor in schools' inability to educate students. Since schools are financed by the state based on Average Daily Attendance, school administrators are loath to suspend students who disrupt not only their own education, but also the education of the students who want one. In an article published January 14, 2007 in the Los Angeles Times, a Title I school in Compton, California- half Black and half Latino, and only recently taken over by the state for malfeasance- was able to achieve an 868 API score, comparable to Beverly Hills and San Marino, because the principal did not hesitate to suspend 100 of its 467 students until they could comport themselves in a manner that would allow them to be educated. It is amazing how quickly parents can get their children to behave when they can no longer dump them on the schools.

2. The major difference between the successful private schools that now accommodate 92% of the White students who have abandoned public education in Los Angeles and the Los Angeles Unified School District is the teacher-to-student ratio. Private schools have 15 to 20 students per class, while LAUSD permits 43 to 1. If a teacher has five classes with 43 students in each, it is unrealistic to expect rigorous writing assignments from teachers who cannot reasonably be expected to grade 215 essays a week- 75 is doable, especially in the context of a school without the aforementioned behavior issues. It is highly suspicious that money for the improvement of public schools goes everywhere except toward lowering teacher-to-student ratios.

3. The total capacity of all colleges and universities in the United States is 30% of high school graduates. End the disingenuous ed-speak rhetoric about all students going to college- college is not the only way to attain the skills necessary to become a productive and successful member of this society. The industrial arts program in LAUSD has been systematically closed down over the last 30 years based on the false assumption that the cost of retrofitting these shops to the exigencies of modern technology would be prohibitively expensive. In countries like Germany and France, the costs of retrofitting and supplying educational materials were borne by the private sector, which was happy to do so in exchange for a constant supply of well qualified graduates from the high schools. This has allowed these corporations to avoid having to take mechanics off the shop floor to be retrained at even greater expense.
In Europe and elsewhere, this has been a win/win scenario for both education and business, while avoiding the ineffective, rigor-free alternative of corrupt private for-profit "universities" like University of Phoenix and Trump University. Furthermore, students who subsequently decide that they want a higher academic education can more easily achieve this goal when they already have a skilled profession to pay for the presently daunting tuition and other costs of attaining such an education. Companies like Home Depot are unable to keep many young employees beyond their 90-day trial period, because the schools have not educated them to the requisite basic skill level and social responsibility necessary to show up and work to even a minimal level of competency. What is the cost to American business, in competitive terms, of being unable to find adequately educated employees? It takes approximately six months of training to pass the state certification examination to be a welder. There is a critical shortage of welders in this country, where the average starting salary for this trade is $40,000 a year. Are people who are gainfully employed as likely to join gangs, or to join the 2 million inmates who presently occupy our jails and prisons?

4. The vast majority of our student population is condemned to failure before they even arrive at school, because they come from families that do not have the ability to physically and intellectually nurture their children in a manner that would lead to ultimate success in school. We clearly know that things as mundane as diet, being read to, and having parents with the time to talk to and parent their children create the stimulation and structure that are the preconditions for success in school. What part of 80% of human brain development taking place before the age of 3 do those making education policy in this country not get? If the schools in Los Angeles and elsewhere became the Zócalo, or cultural town square, of the community, many of the disadvantages that presently plague our students and lead to their failure in school could be avoided- and the attraction of the gangs that presently fill this education vacuum would become a non-issue. At the turn of the last century, the settlement house provided the early acculturation that assured the success in school of a whole generation of European immigrants. Why can't we do that now? Not only should local schools have a preschool program, but there should be outreach into the community to identify women who are pregnant and to ensure that they are educated and aided in matters such as healthy diet, which would allow their children to reach their genetic potential- a prerequisite for ultimate academic success. Community markets run at the school on a weekly basis could significantly bring down the cost of healthy foods for the community. Every successful industry has to be concerned about the quality of the raw materials used to produce its products; why should education be any different if it wants to succeed in educating its people?

5. Schools are traditionally underutilized. One factor leading to the academic failure of predominantly minority children, and to the related rise in gangs, is the inability of lower socio-economic parents to parent their children, because they are too busy working two or three low-paying jobs to make enough money to support their families.
Thus, their children are being raised by the streets and by the gangs that vie with our society for the hearts and minds of these kids. Offering night school programs at these schools would have several advantages: it would educate these parents into a skilled labor force able to meet the financial needs of their families, while they remain the necessary and irreplaceable presence in their children's socialization and accountability. Adolescents by their nature push for limits, and absentee parents cannot supply these necessary limits. Furthermore, parents who are being educated would have the ability to help their language-learning children, something they are presently unable to do in the vast majority of cases. If a kid on the Westside of Los Angeles doesn't get it, the parents get the child a tutor or tutor the kid themselves; but what happens when the parents don't have these options because their own educational level is so low? Or stated another way: I am presently teaching the children of the students we pushed through school without even a basic education. It's time to break this cycle, which will not get better by saying, "I am somebody."

6. LAUSD and the state make a lot of noise about teaching to state standards. However, there is a fundamental flaw in their reasoning: it assumes that a student in a given grade has already mastered the underlying standards for the previous years of education. This is clearly not the case. The vast majority of students not only are unable to achieve the standards for their age-determined grade level, but are also deficient in the standards for many of the prior years. A more rational approach to initial placement would be to assess the student's actual ability and then place them according to that ability, rather than placing them by age into classes that frustrate them, cause them to turn off at an early age, and disrupt the educational process for other students. In France at the Lycée International, no attempt is made to teach substantive courses to foreign language speakers until they have had at least one year, and sometimes two years, of intensive language instruction. Students who have language mastery then have little difficulty catching up with their peer group. But an educational system that continues to socially promote students through grade after grade of standards they have not mastered should not be surprised when these students either drop out of school or fail to achieve even the minimum education they need to be successful citizens. The vast majority of LAUSD students have only basic interpersonal communication skills (BICS), which can be acquired in as little as six months in the United States. The more rigorous cognitive academic language proficiency (CALP), which is necessary for higher education, is nowhere to be found in the majority of LAUSD schools, where students truly believe that a high school diploma is attained by copying a certain number of answers out of a book without understanding what they are writing. This phenomenon has gone on so long that a motivated and conscientious high school teacher is often greeted with hostility if he dares to try to elicit CALP from students who have been "educated" to be antipathetic to education. This is the result of a misguided effort not to frustrate students.
Many teachers teach a curriculum without rigor, because they realize that the majority of their students have been pushed through the system without the basic skills necessary to do the ever more rigorous work demanded by subsequent grade levels. As the disparity between learned ability and grade-level demand grows- usually in middle school- the behavior issues become more pronounced. Politicians and school administrators shouldn't be surprised that these students fail the California High School Exit Exam (CAHSEE). Their solution? Stop giving the CAHSEE. The State of California cannot come in with a "red team" audit of a failing school that leads to a state takeover of that school and then allow the principal of the same school to intimidate teachers for failing students rather than giving them a "passing" grade.

7. All testing should be limited to one assessment to determine appropriate class placement- irrespective of age- and a secondary assessment at the end of every grade level to determine whether the student has achieved the minimum mastery of subjects necessary to allow promotion to the next grade. If this were instituted, you might actually have a chance of meaningfully teaching to standards. Supplemental expenditures for early identification and tutoring of struggling students should receive top priority, along with giving these underachieving students the best teachers- it's just cheaper in the long run. Presently, the newest teachers are given the most difficult classes, while the older, more seasoned teachers teach the easier or more intellectually stimulating classes- this has led to an average 50% turnover of new teachers in LAUSD within 5 years. Although the District says it wants highly qualified teachers, the reality is that it is not unhappy to see a higher priced teacher quit in favor of a less expensive first-year teacher. The sophistication necessary to calculate the exorbitant cost of this constant turnover of teaching staff, which dwarfs the savings derived from paying lower salaried new teachers, is beyond the understanding of most administrators, whose own education was in teaching school and not in running a multi-billion dollar public corporation. The present over-testing regimen is degrading and disheartening to both students and teachers and wastes precious teaching time.

8. Bilingual education is a must in the global village. The only unequivocal way to tell a Latino student that his culture has value is to also teach his language. It is not going to hurt the rest of us- and besides, Latinos are the majority of the population in this state. Furthermore, all research shows that students who are literate in any language have a much easier time transitioning to English, and recent studies have shown that bilinguals process all language in all subjects much more quickly than monolingual students. One of the greatest lies being perpetrated in education is that it would require more money to solve the education crisis. On the contrary, it would actually cost significantly less than the price of incarcerating over 2 million people in a country that, in reality, is in dire need of an educated workforce to build the infrastructure that would make Los Angeles a better place to live for all of us.

9. The budget of the Los Angeles Unified School District, when all expenditures are taken into consideration, is larger than the budget of the City of Los Angeles.
One of the only justifiable reasons to allow such behemoth inner-city school districts to exist is to take advantage of the economies of scale in purchasing educational materials and in the construction and maintenance of schools. The reality of LAUSD is quite different. There is a list of agreed-upon vendors from which LAUSD buys, even though any individual could walk into a local store and get a better price than the school district. When we tried to purchase computers at our school for $300 less per computer, we were told that we could only buy from the higher priced district-approved vendors. Rather than "manhattanize" the construction of future schools on existing school sites- which are presumptively more likely to be clear of toxic waste, as then-LAUSD CEO Howard Miller suggested almost 15 years ago- LAUSD has chosen the far more expensive alternative of using eminent domain to build new schools on often contaminated sites, without examining far less expensive alternatives. It is also worth noting that something as simple as staggering the starting and ending times of school to ensure greater utilization of existing schools might obviate the necessity of building more schools, while accommodating the many students who have to work nights to help support their families. These students would be much more likely to stay engaged and stay in school if they were allowed to get adequate sleep before coming to school later in the day.

10. The exclusive source of school administrators in LAUSD, with few exceptions, is schoolteachers, who have a totally different skill set than what is necessary to effectively run a multi-billion dollar business like LAUSD. This system exists because becoming an administrator is the main form of upward mobility for teachers who are burned out by the intolerable present conditions in our schools. Furthermore, as the cost of home ownership continues to rise in Los Angeles, we cannot hope to attract less transient teachers unless we compensate them adequately, so that they will stay with the teaching profession. Administration of schools should become a totally separate educational track that prospective administrators commit to and are educated for during college, rather than the sole source of upward mobility for the teaching profession. 5 Dec
WANT TO STOP MANSIONIZATION IN THE MIRACLE MILE? - There's a war going on between residents of the Miracle Mile of Los Angeles. They are divided over the best and most effective way to stop unrestricted, out-of-scale growth that up until now has shown little or no concern for maintaining the intrinsic character and charm of the Miracle Mile neighborhood. While there is general agreement among all Miracle Mile residents that some form of residential development restriction must be put in place immediately, that is where any consensus among the competing factions seems to end.

One faction, organized around those who have been active in the Miracle Mile Residential Association (MMRA), its President James O'Sullivan, and MMRA member Ken Hixon, seems to have decided that a Historical Preservation Overlay Zone (HPOZ) is the only solution capable of controlling development and protecting the historical character of Miracle Mile residences... even though this is clearly not the case. What goes unmentioned among these HPOZ supporters is that while an HPOZ might be good for those Miracle Mile residents living in rent-stabilized apartments, it is overkill for single-family R1 residents, who would see their costs for even the most modest maintenance and remodeling- consistent with the restrictions put in place by an HPOZ- double or even triple once their residences have to conform to the inherently protracted requirements of the proposed HPOZ. In effect, through the imposition of a costly HPOZ, the R1 residents would be subsidizing the continuance of rent-stabilized multi-unit dwellings within the borders of the proposed HPOZ.

There are clearly less draconian measures than an HPOZ- like an R1 Variation Zone, with 16 different neighborhood model designs available to protect the character of different types of neighborhoods without becoming an impossible and prohibitively expensive burden on Miracle Mile residents- but the MMRA leadership has up until now been against even considering them. If you wonder why, it's because none of the sixteen R1 Variations that could be implemented in the Miracle Mile does anything to protect residents in the smaller rent-stabilized multi-unit apartments that are abundant in the Miracle Mile. Therefore, because of the adversarial interests between those Miracle Mile residents living in single-family R1 houses and those living in mostly small, presently rent-stabilized multi-occupant buildings, it appears that O'Sullivan, Hixon, and others heading the MMRA have not been forthcoming with all the facts necessary for ALL the residents of the Miracle Mile to make an informed decision as to what would be best for EVERYBODY in their neighborhood. In fact, they seem to have actually manipulated the HPOZ process by alleging "facts" in support of the proposed HPOZ that are easily verifiable as untrue. One such huge distortion made by the folks pushing the HPOZ can be seen in the video linked here, which claims that 80% of the 1351 structures in the proposed Miracle Mile HPOZ are denominated "contributors" to the proposed HPOZ zone and only 20% are not.
And yet, when you look at the map and identify the specific residences at 5:06 minutes into this YouTube video link, you clearly see that they have included in this 80% figure "altered contributor" (yellow) residences that already show radical deviations from the uniquely historical initial architecture that is supposed to be a substantial prerequisite for an HPOZ. Why is that? In fact, the vast majority of the supposed "contributor" (green) structures on which they base their claim for HPOZ status are actually properties denominated significantly "altered contributor" (yellow). Furthermore, when you aggregate the structures denominated "altered contributor" (yellow) with those denominated "non-contributor" (black), the claim of commonality for an HPOZ goes completely out the window, because the true "contributor" structures are in fact in the clear and absolute minority.

Now here's a radical notion. Even at this late date, when the HPOZ train seems to have already left the station, might it not still be possible for all residents of the Miracle Mile to come together in harmony as a community and propose a compromise alternative plan that attempts to reconcile the reasonable needs of both the single-family R1 residents and the rent-stabilized multi-unit residents? Isn't it still possible to come up with a plan that addresses all of their concerns, while incorporating all residents' common concern of maintaining the quality and scale of this, dare I say, charming community? Even historic preservationist Ken Bernstein of the Office of Historic Resources, Department of City Planning, seems to agree in what he has said- if not in what he has done- that an HPOZ is not appropriate in certain circumstances that seem to pretty closely approximate the Miracle Mile reality:

"An HPOZ is also not the right tool for every neighborhood. Sometimes, neighborhoods become interested in achieving HPOZ status largely to stop out-of-scale new development. An HPOZ should not be seen as an "anti-mansionization" tool: other zoning tools may better shape the scale and character of new construction. An HPOZ is best utilized when a neighborhood has a cohesive historic character and community members have reached a consensus that they wish to preserve those historic architectural features."

Maybe you could give City Councilman David Ryu and Ken Bernstein a call to express your concerns- and the fact that you vote:

City Councilman David Ryu, Los Angeles City Hall, 200 N. Spring Street, Room 425, Los Angeles, CA 90012, Phone: (213) 473-7004, david.ryu@lacity.org

Ken Bernstein, Office of Historic Resources, Department of City Planning, 200 N. Spring Street, Room 559, Los Angeles, CA 90012, Phone: (213) 978-1200, Fax: (213) 978-0017 16 Nov
LAUSD'S FALSE OVERPAYMENT LITIGATION AGAINST TEACHERS AND WHAT THEY CAN DO ABOUT IT - Whether it is fabricating charges against more expensive high-seniority teachers to get rid of them, or subsequently adding insult to injury by making bad faith, false allegations of overpayment against them, both actions are predicated on the idea that a teacher now without salary or benefits will be hard pressed to mount a legal defense. For those of you teachers and other LAUSD employees who have been falsely charged with having been overpaid, what follows is a way you can finally and reasonably defend yourself.

Linked below is an "Answer" to LAUSD's false allegations of overpayment that you can use, along with the blank forms and the content necessary to defend yourself in court. Copy the information from my attached sample "Answer" below into the blank template provided, if the facts are the same in your case. Then cut and paste the following as indicated into the blank forms provided at the bottom. Be sure to change the case number if yours is not the same. Then all you have to do is file the documents with the court and pay the filing fees.

(Form PLD-C-010, two pages) Affirmative Defenses

1. This Court lacks jurisdiction because the causes of action alleged in the First Amended Complaint arise under a Collective Bargaining Agreement (CBA) entered into by LAUSD and United Teachers Los Angeles (UTLA); the alleged overpayment of wages is a subject of mandatory collective bargaining under the Educational Employment Relations Act (EERA); and the interpretation of issues arising under a CBA is subject to the exclusive initial jurisdiction of the Public Employment Relations Board (PERB). Therefore, LAUSD has not satisfied the procedural prerequisite of exhausting its administrative remedies under the EERA.

2. Neither the First Amended Complaint nor any cause of action alleged therein alleges facts sufficient to state a cause of action against this answering defendant.

3. The damages, if any, were the result of the negligence, fault, incompetence, carelessness, want of due care, and defective design and operation of the LAUSD payroll system by LAUSD, its agents and employees, and others presently unknown.

4. The First Amended Complaint and each cause of action therein are barred by the applicable statutes of limitations.

5. This answering defendant has fully performed all acts required under the CBA.

See Attachment 4: Plaintiff LAUSD has failed to join UTLA as a necessary and indispensable party. On information and belief, UTLA has failed and refused to file an unfair labor practice charge against LAUSD based on LAUSD's violation of the terms of compensation under CBA Article IX. Since LAUSD has filed identical complaints against UTLA members every year since 2009, and UTLA has failed to represent its members as the exclusive bargaining agent or otherwise enforce the terms of the CBA, complete resolution of all claims and disputes requires the joinder of UTLA. Attorney fees under CCP sec. 1021.5: should Defendant prevail and obtain dismissal of this lawsuit on behalf of all defendants, counsel will have obtained a benefit for a large number of persons while enforcing an important public policy.

(Form MC-025) AFFIRMATIVE DEFENSES, CONTINUED.

6. LAUSD's false representations concerning the terms and conditions of defendant's compensation under the CBA bar any claims it may have to equitable relief, under the unclean hands doctrine.
7. LAUSD's false representations concerning the terms and conditions of defendant's compensation under the CBA are a breach of the duty of good faith and fair dealing implied in every contract.

8. LAUSD's FAC is virtually identical to complaints LAUSD has filed every year since 2008, alleging that it has made the identical mistakes every year; it has therefore waived the right to claim "mistake" as a ground for relief.

9. The compensation payable under the CBA is an annual salary paid monthly, without regard to the number of days or hours worked in the relevant pay period. Therefore, LAUSD has falsely represented the terms and conditions of a written contract (i.e., the CBA) to the Court and violated the duties of candor and honesty owed to this Court.

10. LAUSD has failed to mitigate any damages it may have sustained, caused by its continuing to utilize a payroll system that fails to reflect the compensation terms and conditions negotiated as the result of arm's-length bargaining between LAUSD and UTLA.

11. LAUSD lacks standing to enforce the terms of the CBA unless and until it exhausts its administrative remedies.

12. LAUSD has failed and refused to provide any documentary evidence supporting its allegations that LAUSD's records contain facts showing that defendant was overpaid.

13. LAUSD's failure to provide defendant with documentary evidence substantiating its allegations prior to filing this lawsuit implies that it has no such evidence and will be unable to establish the elements of its causes of action as a matter of law- or, if it has concealed such evidence, that it is estopped from offering the evidence at trial.

14. LAUSD has attempted to commit a fraud on the Court by knowingly concealing from the Court that its purported calculations of "overpayments" based on a six-hour day misrepresent the agreement that teachers work "no fewer than eight hours," as set forth in Article IX of the CBA. Article IX provides in relevant part as follows: "1.0 General Workday Provisions: It is agreed that the professional workday of a full-time regular employee requires no fewer than eight hours of on-site and off-site work, and that the varying nature of professional duties does not lend itself to a total maximum daily work time of definite or uniform length."

Attachments: Answer Lenny.pdf, answer add.pdf, answer contract blank PLD-C-010.pdf, blank attachment four mc025.pdf

In addition, you will have to fill out "Summons" forms and "Proof of Service" forms that you can download from the Internet: summons on XC.pdf, 16-08-12 Amd POS.pdf

File the answer: you send it to the courthouse and attach a "proof of service," which you can download from the Internet. You are required to serve the attorney for the District. Filing the answer triggers the requirement to pay the filing fee for the "first paper," which will depend on the amount of the claim if the case is in "limited jurisdiction." In my case, we filed the opposition with the intention that it be filed as an "unlimited" cross-complaint. This results in a higher filing fee, but it also opens up a few doors to better discovery. If you file the cross-complaint, you also file a summons on cross-complaint (another form you can download from the Internet). If you forget to file it, you can always file it later; however, this is the "official" document that gives the court jurisdiction over the cross-defendants. You can always contact my lawyer through this site if you have any other questions. 3 Nov
PROPOSITION M- THE WRONG SOLUTION? - While improving transportation is right up there with motherhood and apple pie when it comes to things most people are for, I am not so sure that Proposition M is the best solution for resolving the historically premeditated traffic nightmare that Los Angeles County has purposefully been allowed to become. In the early 1950s, you could stand on the corner of 3rd and Fairfax in front of the Farmers' Market and catch a Red Car down Fairfax to Venice and then out to Culver City and the beach. Or you could go in the other direction up to Santa Monica Blvd. and onto a right-of-way that went through Beverly Hills, between big and little Santa Monica Blvds., to the Westside. These and other Pacific Electric Red Car routes throughout L.A. County made up the "largest electric railway system in the world in the 1920s." And yet, in a very short time, with General Motors' eye on selling more cars in Southern California, Pacific Electric's parts-supply companies were bought up and this magnificent transit system was forced out of business.

But this destruction of a public utility, facilitated by a corrupt governmental scam, didn't end with the destruction of the Red Car. Over the years, as L.A. continued to grow, no consideration was ever given to the already existing rights-of-way that the Red Car had left behind and that would be necessary to ultimately address the traffic nightmare L.A. was rapidly being allowed to become- all for the shortsighted motive of financial greed that put making a buck ahead of making a livable megalopolis. The old, super-wide Venice and Santa Monica Blvds. were developed in a manner where their more than adequate median rights-of-way were given over either to more car lanes or to buildings like the Venice Library or an expanded Beverly Hills City Hall, which now ensures that these streets can never be used for the rapid transit systems of the future. As late as the early 1970s, Southern Pacific Railroad used to run its trains down Santa Monica Blvd., in front of my apartment near Westwood Blvd., as a condition of maintaining its right-of-way. Any thinking governmental transit authority could have seen that this right-of-way would be necessary in the not too distant future to accommodate L.A.'s neverending population expansion. Do any of you remember the train trestle that used to cross Beverly Glen in the not too distant past?

So now what Proposition M is asking of the voter is to raise the sales tax by 0.5% to fund yet another not-completely-thought-through, uber-expensive rapid transit project (or projects) that is sure to run billions over budget, on the belief that people who have historically always done the wrong thing are now somehow going to do what's right- do leopards really change their spots? A few different ideas for dealing with an L.A. that is the logical traffic-nightmare result of prior bad 20th century choices might be to consider some of the following alternatives before we leap into Proposition M:

- Driverless car technology is moving so fast that it might be with us before the 5 years it will take to get the Purple Line subway extension to La Brea, let alone the 20 years it will take to get it to Westwood and beyond. Would not investment in driverless car technology be more economically reasonable and less disruptive than what might be a significantly more expensive and outmoded subway or bus expansion?
Right now, after years of disruptive tearing up of Wilshire to extend the Purple Line, I find that I can get to Beverly Hills faster at rush hour on my bicycle than in a car or bus.

- The type of subway and bus system Proposition M seeks to build works in a city the size of Paris, where there is no place in the city more than 5 minutes away from a Metro, RER, bus, or SNCF (train) station. But Paris is a city you can walk across in about an hour. Where will an hour's walk get you from your house?

- Just before the Exposition Line opened to Santa Monica, it was mentioned that the temporary reduction in car traffic would last only about 3 years before traffic would again be at pre-Expo Line levels. With Internet and computer technology, how many of us really have to get into a car each day and go to work? How many people could do their jobs from home? And what if Los Angeles were broken up into a multitude of municipalities like Santa Monica, Culver City, Glendale, and Burbank, where more city, county, state, and federal government services were offered at a local level?

In the final analysis, Proposition M will fail because it completely ignores the bad decisions our leaders made in the past at the behest of rich business interests- something they are arguably doing again with Proposition M. 2 Nov
Give Mother Nature a hand this holiday season. She needs the help. - There are lots of ways to give a Christmas present to Mother Nature. You can do your part for climate change, since it’s increasingly evident that Uncle Sam won’t be going there. You can help clean up a beach. Here are the links for Surfrider’s beach cleanups. Oahu. Maui. Kauai. And you can help the birds. The state doesn’t adequately fund its natural resource commitments through the state Department of Land and Natural Resources, but you can find lots of ways to help out. One of them supports the endangered O`ahu `elepaio. The Hawaiian `elepaio is not the most colorful bird in the forest, lacking the reds, greens, yellows and fancy crests of many of the honeycreepers and honeyeaters. But it’s among the friendliest. This perky Old World flycatcher, or monarch flycatcher, will land right near you on a forest trail and follow you along, with its tail jauntily sticking up in the air. Dr. Eric VanderWerf, of Pacific Rim Conservation, has launched a crowdfunded effort to trap rats in `elepaio habitat. Rats are among the most significant threats to Hawaiian forest birds. There are videos of them creeping along branches to bird nests, and going after eggs and chicks. Rats are not native to the Hawaiian environment, and controlling them is key to protecting what’s left of native birdlife in the Islands. If you have some dollars to shell out for Mother Nature, consider helping the `elepaio. Here’s the site with the information on how to do it. I’ve worked with VanderWerf as a reporter, and he’s one of the good guys in bird conservation. (Not many bad guys in that field, actually.) © Jan TenBruggencate 2016
Fake News: What's free speech and what's crying "fire" in a crowded theater - There’s free speech, and there’s yelling “Fire!” in a crowded theater. “Fake news” and some of the more awful conspiracy theories are more like the latter—and ought to be treated that way—as crimes. We have a tradition in our country of letting people speak without fear of being censored. The First Amendment to the Constitution guarantees it. But freedom of speech in the United States is not absolute. You can say most anything you like, but it shouldn’t cause harm. Abraham Lincoln gets credit for this line: “My right to swing my fist ends where your nose begins.” If one person claims a presidential candidate is running a child sex ring out of a pizza joint, and then someone else shows up with an assault weapon to clean it up—then maybe the person who spread that filth needs some jail time. If you provide a vehicle for that kind of nastiness—like a website or a radio station that gives voice to dangerous conspiracy theories—isn’t that handing a megaphone to the guy yelling “Fire!”? Fake news resulting in aggressive action may not be protected under the First Amendment. There is solid legal footing for the idea that these are “fighting words,” which do not qualify as privileged speech. “Fighting words” are described by the U.S. Supreme Court in a 1942 case as “those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.” The case is Chaplinsky v. New Hampshire. It seems that showing up at a pizza stand with a rifle at the ready, or chopping down a Hawai`i Island or O`ahu papaya farmer’s crop, or vandalizing a historic irrigation system—those may qualify as breaches of the peace. Shouldn’t those who incite that kind of behavior be held liable? The First Amendment Center notes that it’s a fine line. “The lower courts have had a difficult time determining whether certain epithets constitute ‘fighting words.’ At the very least, they have reached maddeningly inconsistent results,” it writes. The Supreme Court has given citizens wide leeway to use profane and abusive language, but has been less clear when the language is provocative. Still, the standard was established nearly a century ago, when Justice Oliver Wendell Holmes issued the 1919 unanimous Supreme Court opinion about yelling “Fire.” “The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic. It does not even protect a man from an injunction against uttering words that may have all the effect of force,” Holmes wrote. Holmes admitted that it’s not an easy call. One issue is whether speech that promotes physical harm to people is sufficient, or whether calls for damage to property are also covered. “The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. It is a question of proximity and degree,” Holmes wrote. I’m not an attorney.
I’m an old journalist with a lifelong history of supporting the First Amendment. But in these troubled times, I’m forced to modify my support for unfettered free speech. Some “fake news”-- telling outright lies that cause people to act in illegal ways-- may meet the standard for unprotected speech. © Jan TenBruggencate 2016
Does the Web make you wise? Or the opposite? - Does using the web make you stupid? There’s some evidence that it does, but how does that work? Why doesn’t having all the world’s information at your fingertips make you, like, the smartest person in the world? There is a classic Hawaiian educational tradition that may make some sense of this. Nana ka maka, hana ka lima. It’s the dictate that children keep their mouths shut and learn by watching, figuring out how things work and how things are done by using their heads, their senses, their curiosity. Instead of just asking and being told. With the Internet, you don’t have to figure stuff out. You just look it up. And so often, what you look up is wrong, but since you’ve lost the skill of critical thinking, you don’t recognize that the web is lying to you. Worse yet, the web may be giving you exactly what you asked for—but you’re unaware that you’ve asked the wrong question. A Michigan State study recently found that the more kids use the net, the worse they do in educational testing. The paper on the study is entitled “Logged in and zoned out.” It is to be published in the journal Psychological Science. Here’s the university’s article on the research. “The detrimental relationship associated with non-academic internet use raises questions about the policy of encouraging students to bring their laptops to class when they are unnecessary for class use,” said Michigan State psychology professor Susan Ravizza, the lead author. Of course, if you spend a lot of time on the Internet, you already knew this, right? Because the Web is full of news on how dumb the Web makes you. Psychology Today four years ago carried a piece called The Internet makes you stupid and shallow. Author Ravi Chandra writes: “A tech-filled life means that we will have to be more careful choosers of our own mental and emotional destinies. Or else we’ll sell our souls to the search engine store.” “As the internet trains our brains to be distractible, we are rewiring our synapses and losing capacity for depth,” he writes. He references a 2011 Pulitzer-finalist book, The Shallows: What the Internet Is Doing to Our Brains, by Nicholas Carr. Of course, not everyone agrees with that proposition. A Pew Research Center study found that most scientists (all of them internet users, I’m just sayin’) say Carr was wrong. One theme in this review of experts is that we’ll be stupider in some ways, but smarter in others. “It’s a mistake to treat intelligence as an undifferentiated whole. No doubt we will become worse at doing some things (‘more stupid’) requiring rote memory of information that is now available through Google. But with this capacity freed, we may (and probably will) be capable of more advanced integration and evaluation of information (‘more intelligent’),” said Stephen Downes, of Canada’s National Research Council, cited in the Pew report. So, if the Net is making us stupid in some ways and smart in others, what kinds of stupidity should we worry about? One is comprehension.
A pair of Australian researchers, Val Hooper and Channa Herath, say your memory goes to hell. Their article is Is Google Making Us Stupid? The Impact of the Internet on Reading Behaviour. “In general, online reading has had a negative impact on people’s cognition. Concentration, comprehension, absorption and recall rates were all much lower online than offline,” they wrote. UCLA psychology professor Patricia Greenfield said some skills are increased—spatial skills are improved among video game players—to the point that laparoscopic surgeons who are good at video games are better at doing surgery than those who aren’t so good at video games. "The best video game players made 47 percent fewer errors and performed 39 percent faster in laparoscopic tasks than the worst video game players," Greenfield said. But she cautions that a lot of other skills are lost in the process of gaining spatial skill. The loss of time for reflection, analysis and imagination—all things gained by reading—leads to a loss in the capacity to reflect, analyze and imagine. So your doctor will be really good at doing surgery, but not all that good at deciding whether you actually need surgery. There’s the old line: to a guy with only a hammer, every problem looks like a nail. Going back to the Hawaiian tradition of learning, another Hawaiian saying is “‘U`uku ka hana, `u`uku ka loa`a.” It means that if you only put in a little effort, you only get a small result. Sometimes folks, seeking a quick response, don’t take the time to ask the question properly. If you can’t frame the question, how can you expect a useful answer? A lot of people believe that agricultural chemicals with very low toxicity are actually very dangerous. How could that be? Perhaps it’s in how they ask the questions. Let’s forget most agricultural chemicals and just look at water. If you ask the Google question in a particular way (Is water toxic?) you get a whole lot of scary stuff about contaminated water in the city of Flint, about water intoxication, about toxic compounds in drinking water. If you ask another way (Is water necessary for health?) you get a very, very different set of results. If you ask a weird question (Does water contaminate groundwater?) you get answers about fracking, and contaminated groundwater and pesticides in groundwater. If you ask another question (Is water a solvent?), you may be surprised to learn it not only is, but it’s the most common solvent—often called a universal solvent. Go back to the first question: is water toxic? There are lots of caveats. Are we talking about mineral-infused water, water being used as a solvent for something else, water in what quantities and concentrations? If water is that complicated, how are you going to make sense of products that are less common? You have to work very, very hard at it, and try to remove your preconceptions from your inquiry. If you don’t realize that the results of your internet search are framed by your own limitations, perhaps you’ve been spending too much time on the internet. © Jan TenBruggencate 2016
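To make the framing effect concrete, here is a minimal, purely illustrative Python sketch. It is not how any real search engine ranks pages; the SNIPPETS list and the naive_search function are invented for this example. It simply shows that even a crude keyword matcher, fed the differently framed water questions above, hands back a different set of "results" for each framing.

```python
import re

# Invented snippets standing in for web pages (illustration only).
SNIPPETS = [
    "Flint water crisis: lead contamination found in city drinking water",
    "Water intoxication: drinking too much water can be toxic",
    "Water is essential for health; adults need about two liters a day",
    "Water is often called the universal solvent in chemistry",
]

STOPWORDS = {"is", "a", "the", "for", "in", "of"}

def tokens(text):
    """Lowercase words only, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def naive_search(query, docs):
    """Return snippets sharing at least two meaningful terms with the query."""
    terms = tokens(query) - STOPWORDS
    return [d for d in docs if len(terms & tokens(d)) >= 2]

for q in ("Is water toxic?", "Is water necessary for health?", "Is water a solvent?"):
    print(q, "->", naive_search(q, SNIPPETS))
```

Each question surfaces only the snippet that echoes its own framing: the "toxic" question finds the scary snippet, the "health" question finds the reassuring one, and none of them returns the whole picture.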
How long will you live? Exercise plays bigger role than diet, smoking and a lot of other factors. - Finally some clarity. You aren’t what you eat. You are what you DO. A new study suggests you can get away with a fair number of dietary and other lapses, as long as you stay active. There are Hawai`i data that make sense of this. We’ve got our health issues in the Islands. Diet alone is an issue. Think fast food burgers, kalua pork and bento lunches with five kinds of meat. (People outside Hawai`i will question this, but I recently had a box lunch with rice, pickled vegetables and Korean barbecue ribs, barbecue chicken, teriyaki beef, Goteborg sausage and Vienna sausage.) Despite such excesses, Hawai`i has the highest life expectancy in the nation. More than 81 years on average. This isn’t news. Here’s just one citation, from the Centers for Disease Control. Hawai`i also has one of the highest rates of exercising in the nation. One Gallup survey from 2013 said 62 percent of Island residents said they exercise at least 3 times a week. And it’s not just the nice weather. We’re second highest in the nation for exercise, after Vermont. And Montana and Alaska are right up there, too. All are also in the top third in longevity. So, is there a connection? Sure there is, according to these researchers from the American Heart Association and Queen’s University. They say exercise trumps diet and lots of other risk factors toward living a long life. (Here's the EurekAlert review of the paper we're citing.) And you don’t have to be a triathlete. “Moderate levels of physical activity consistent with current recommendations may be all that is needed to derive a clinically significant benefit for habitually sedentary individuals,” said Dr. Robert Ross, of Queen’s University. Those are the basics. Let’s get into the details. The report is called Importance of Assessing Cardiorespiratory Fitness in Clinical Practice: A Case for Fitness as a Clinical Vital Sign. It was published in the American Heart Association journal, Circulation. The report says that your level of physical fitness, referred to here as cardiorespiratory fitness (CRF), is a key to mortality. It's dead simple. Low fitness, high mortality. Even moderate fitness, lower mortality. “Mounting evidence has firmly established that low levels of cardiorespiratory fitness are associated with a high risk of cardiovascular disease, all-cause mortality, and mortality rates attributable to various cancers,” the paper says. It goes on: “A growing body of epidemiological and clinical evidence demonstrates not only that CRF is a potentially stronger predictor of mortality than established risk factors such as smoking, hypertension, high cholesterol, and type 2 diabetes mellitus, but that the addition of CRF to traditional risk factors significantly improves the reclassification of risk for adverse outcomes.” Any one study gives you a pinpoint view of a very broad subject. This is different.
This particular report is not new research, but a review of the known literature on health and mortality over the past 30 years, and an attempt to get a better handle on risk factors. And it looks at studies with thousands of participants. One study after another has confirmed the role of exercise in living longer. Here are some of the study results. A study of nearly 10,000 men: “Survival increased in subjects who improved exercise capacity.” A study with more than 3,800 participants: “Fitness was a strong predictor of outcomes irrespective of weight status.” A study with more than 15,600 participants: “Moderately fit had 50% lower mortality than those with low CRF.” The upshot: “A consistent finding in these studies was that after adjustment for age and other risk factors, CRF was a strong and independent marker of risk for cardiovascular and all-cause mortality.” And as we said earlier, you don't have to go to extremes in exercise. The report looks at studies of high-intensity training compared to continuous moderate-intensity training, and does not find a compelling case for one over the other. Both improve cardiorespiratory fitness, it says, but there are concerns about injury and “cardiac complications in selected patients” with higher-intensity workouts. So, what kind of exercise should you consider? Here is the language from the report: “Exercise that involves major muscle groups (legs, arms, trunk) that is continuous and rhythmic in nature (eg, brisk walking, jogging, running cycling, swimming, rowing, cross-country skiing, climbing stairs, active dancing), in contrast to high-resistance muscle-strengthening activities that produce limited CRF benefits.” If you’re getting started, depending on your condition, it recommends building up to a regimen of three to five days a week, for 30 to 60 minutes at a time. Start slow and easy if you’re just beginning an exercise regime; breaking the initial sessions up into batches of at least 10 minutes is okay. If you're in poor shape, the study recommends increasing activity in coordination with your medical provider. And it wouldn't hurt to read the whole report. It's free and available as a PDF from the heart association website. © Jan TenBruggencate 2016
Hawaiian climate: it's not changing the way you expect--it's getting scarier - Much of what you’ve heard about the Hawai`i impacts of climate change may be false. It could be worse than what you’ve heard. Example: It’s probably not going to be drier everywhere, as many have suggested in recent years. In fact, according to a new paper, it’s more likely to get more extreme everywhere—kind of like American politics. Although “Hawaii is renowned for its generally pleasant weather, anticipated climate change over the present century will likely present significant challenges for its inhabitants,” says the paper, published by the American Meteorological Society. Kevin Hamilton, of the University of Hawai`i’s International Pacific Research Center, said the best research indicates it’s likely to get wetter in wet areas, but drier in dry areas—deepening the divisions between the different zones of the Islands. IPRC is part of the university's School of Ocean and Earth Science and Technology. “We expect generally more rainfall on the windward sides and less on the leeward sides. Combined with increased evaporation from the warmer surface this could lead to particularly dry conditions in places that are already feeling water stress, such as west central Maui,” said Hamilton, the retired director of the IPRC, in an email. Hamilton and co-authors Chunxi Zhang, Yuqing Wang and Axel Lauer just published their latest data in the Journal of Climate. It is entitled Dynamical Downscaling of the Climate for the Hawaiian Islands. Part II: Projection for the Late Twenty-First Century. Their work also anticipates warmer weather in the Hawaiian uplands. “The surface air will warm significantly and the warming will be substantially more pronounced at high topographic elevations,” Hamilton said. That has significant impacts, for example, for Hawaiian upland forest habitats. Previous research suggests that warming high mountains will increase upland mosquito populations, with direct impacts on native birds. Mosquitoes carry avian diseases like avian malaria and pox. “While published research on climate-related stress has concentrated on a limited number of species, it is likely that climate change in Hawaii will threaten many species and perturb terrestrial and coastal ecosystems, with unfortunate effects on the state’s remarkable contribution to global biodiversity,” the authors wrote. Another issue: If drier areas get drier, they’ll be in greater need of irrigation to support agriculture, landscaping and other uses. That water will need to be diverted from the wetter areas. Water issues are intensely political matters in the Islands, and this suggests they’ll continue to be problematic for policy-makers.
“Available surface and groundwater resources are scarce enough that water use restrictions are common in some areas during droughts, while agricultural demands for groundwater have sparked a history of public controversy and litigation,” the authors wrote. Extreme weather events are likely to increase, Hamilton and his team wrote, like the big Manoa, O`ahu, flood of 2004 and this year’s Iao Valley flood, both of which caused massive damage costing into the tens of millions of dollars. The IPRC group is continuing to fine-tune its data, but Hamilton said its climate models, when compared with past weather conditions, are accurately representing what’s been happening. And one of the warnings from the models is that apparent trends may not reflect what will happen in the future. For example, while the models predict the drying trend in Hilo that has been seen in recent years, that may not continue. The models predict Hilo will get significantly wetter later in this century. If you’re interested in detailed analyses, here are links to the group’s previous paper and the current paper. © Jan TenBruggencate 2016
Monsanto and Mother Jones agree(!): NYTimes lying with GMO statistics - Are we finally sick and tired of people twisting facts to suit their agendas—even when they’re agendas we agree with? New York Times writer Danny Hakim walked into a storm of criticism, even though he was just doing what so many have done for so long. He wrote a big takedown of the GMO industry, which it is perfectly possible to do without lying. But The Times, which published his story Sunday, did a classic smear job. It was so obvious, and so wrong, that it makes you wonder whether anyone at the New York Times is editing its science writers. Now it turns out that folks all over the map have attacked the sloppy reporting—really all over the map. From Mother Jones to Monsanto. Hakim set up a straw man—GMOs were supposed to increase crop yields more than non-GMOs. Then he cherrypicked data to slap it down. Predictably, Monsanto objected. Here is a Huffington Post piece by a Monsanto vice-president. Here are Monsanto’s data for environmentally comparable areas of Ontario, Canada, and France: “Overall, (corn) yields increased from 113 bushels per acre in 1997 to 170 bushels per acre in 2015, an increase of 51 percent. In France during the same period, the increase in yields was only about 10.5 percent.” Hakim missed that, but gratuitously threw in some “confirming” statistics. “Herbicide use is coming down in France while it’s coming up in the U.S.,” Hakim said in an NPR interview associated with his research. He ignores two huge facts. First, France's herbicide use may be down somewhat over time, but it's still equivalent, pound for pound, to North American use. And second, France's fungicide and insecticide use--calculated at weight per acre--is many times the level used on North American crops. Can we agree that those are massive facts in this discussion? The French use more pesticide than the U.S. How do you miss that unless you’re intentionally missing it? Particularly, how do you miss it if you've conducted, as you announce in your second paragraph, "an extensive examination by The New York Times"? I’ve actually talked to actual American Midwest farmers. They’re spraying far less than they used to. And based on the French example, the French non-GMO farmer is spraying far more for insect pests than the GMO farmer. Don’t take my word for this stuff. Folks on all sides of the political and environmental spectrum have gone after the Times for bad science reporting. Mother Jones, the left-wing journal, is far, very far, from friendly to the GMO industry. It’s a regular, persistent thorn in Monsanto’s side. But even Mother Jones attacked Hakim’s work, in an article entitled “How to mislead with statistics.” Was there intent on the part of the New York Times to deceive? Mother Jones writer Kevin Drum thinks so: “If you click on the chart pack in the Times story, you will actually find charts showing raw volume of pesticide use in the US and France. However, they're shown in two different charts, using different units, and broken up into different categories.
If you were deliberately trying to make a comparison nearly impossible, this is how you'd do it.” And Grist, another pro-environment site, also attacked the Times piece. Both Grist and Mother Jones argued against the assumption that American farmers are uneducated, stupid, and prone to make costly errors in judgment. “It would be a shame if we on the liberal coasts decided the technology was useless just because we have a hard time seeing the benefits that are clear to Midwestern farmers,” writes Grist’s Nathaniel Johnson. And here's the Mother Jones comment along a similar line: “The story was pretty shallow in its use of statistics. It assumed that you can compare different countries without controlling for anything (different soils, climates, crops, etc.). And it seemed to suggest that American farmers must be idiots, because they keep buying GMO seeds even though they're worthless.” Let me just say, if you’re the New York Times, “Ouch.” The good news is that this example may suggest there's a crack in the armor of ends-justifies-means reporting. Let's have these conversations, but let's cut the self-serving prevarication and have the discussions on the basis of facts we can agree on. © Jan TenBruggencate 2016
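As a quick sanity check on the yield numbers quoted above, here is a back-of-envelope sketch in Python. The Ontario bushel figures are the ones Monsanto cites; applying France's quoted 10.5 percent gain to the same 113-bushel starting point is a hypothetical, used only to put the two growth rates side by side.

```python
# Ontario corn, 1997 -> 2015, bushels per acre (figures as quoted above).
ontario_1997, ontario_2015 = 113, 170
ontario_gain = (ontario_2015 - ontario_1997) / ontario_1997 * 100
print(f"Ontario: {ontario_gain:.1f}% increase")  # ~50.4%, the cited "51 percent"

# France's gain over the same period is quoted as about 10.5 percent.
# Hypothetically starting from the same 113-bushel baseline for contrast:
print(f"France (same baseline): {113 * 1.105:.0f} bushels per acre")  # ~125 vs. 170
```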
Hawaiian shearwaters have a bellyful of plastic marine debris - Not just seabirds: An entangled Hawaiian monk seal. Credit: NOAA. Laysan albatross chicks have been found dead with their bellies stuffed with bits of plastic, and a new study shows that Kaua`i-based Newell’s and wedge-tailed shearwaters face similar threats. Worse, the amount of plastic found in the seabirds is increasing over time. “On Kaua‘i…50.0 % of Newell’s…and 76.9 % of wedge-tailed shearwater … fledglings necropsied during 2007–2014 contained plastic items in their digestive tract, while 42.1 % of adult wedge-tailed shearwaters had ingested plastic.” That is one conclusion of the paper, “Plastic ingestion by Newell’s (Puffinus newelli) and wedge-tailed shearwaters (Ardenna pacifica) in Hawaii.” It was published in the journal Environmental Science and Pollution Research by Elizabeth C. Kain, Jennifer L. Lavers, Carl J. Berg, Alexander L. Bond and André F. Raine. The researchers also found that “For both species, the frequency of plastic ingestion has increased since the 1980s with some evidence that the mass and the number of items ingested per bird have also increased.” In fact, hundreds of marine species are threatened by plastic, which can mimic natural food sources, or be mistaken for food by seabirds, turtles, squids, fish, oysters, seals and others. The researchers in this paper looked at the stomach contents of seabirds killed by predators or collisions in the 2013-2014 nesting season. The results were compared with a study done in the 1987 season on Kaua`i, when 11 percent of the birds were found to have eaten plastics. For Newell’s shearwaters, that represents nearly a five-fold increase over a quarter century. In both the Newell’s, a mountain-nesting bird, and the wedge-tailed shearwaters, which nest near the shore, the predominant color of ingested plastic was white. Both adults and fledglings had plastic in their guts. Since fledglings receive all their food regurgitated by their parents, the parents are presumed to have been delivering plastic-laced meals to their young. “Plastic ingested by seabirds has been shown to block and take up space in the digestive tract, contributing to dehydration and in some cases starvation,” the authors wrote. There is also a suggestion in the scientific literature that the plastic can release chemical pollutants into the bodies of the birds, they said. “The amount of plastic in the oceans is increasing and poses an increased risk of entanglement, ingestion, and thus morbidity and mortality for marine life,” the authors wrote. National Geographic last year had a story suggesting that every seabird on the planet has, or shortly will have, a plastic ingestion issue. That story references this study, which makes the point that “this threat is geographically widespread, pervasive, and rapidly increasing.” © Jan TenBruggencate 2016
There's an electric car in your future--and sooner than you think - A hot red Tesla S. Credit: Tesla. Electric cars represent a fraction of the number of vehicles on the road, but that’s changing—and indications are it’s soon to be changing a lot faster. EV sales are picking up every month, according to www.ev-volumes.com. And if they're not quite increasing at an exponential rate, they are increasing real, real fast. In January 2014, plug-in car sales were about 15,000 globally. By January 2015 it was close to 25,000. And by January of this year, 40,000. By the middle of 2016, it was approaching 70,000 plug-in cars sold every month. In these examples the sales include pure electric vehicles and plug-in hybrids. Those stats are from the consulting firm EV-Volumes, which says growth in the plug-in market is expected to be 57 percent higher in 2016 than 2015. It says that globally, about 60 percent of the plug-ins are pure electric and 40 percent hybrid. By September 2016, Hawai`i had more than 4,700 electric vehicles, up 27 percent, or more than 1,000, from the year before. The state had more than 22,000 hybrid cars, up 6.5 percent from a year before. China is the biggest player in the electric vehicle field, followed by Europe, then the U.S. and Japan. This Forbes article cites a figure estimating 450,000 electric car sales in China in 2016. Europe is an interesting area. The Netherlands sees electric vehicles reaching almost 10 percent of all cars sold, and is pushing to reach 100 percent electric car sales by 2025. Indeed, all of Europe is pushing hard to increase the numbers, with both subsidies for electric car buyers and aggressive goals. Germany is talking about requiring 100 percent of new cars to be electric by 2030. The car manufacturers have gotten the message. More and more of them are offering electric cars. Nissan says it expects 20 percent of its 2020 production to be emission-free. The push is not only to promote electric vehicle sales directly, but to push back against polluting cars. Paris has banned the weekday use of cars built before 1997. The theory: they don’t have engines as efficient as those built during the past 20 years, and they don’t have the same pollution control equipment. "We know that the major source of pollution in Paris is traffic. Sixty-six percent of nitrogen dioxide and fine particles come from road traffic. And we know it's old cars that spew out the most toxic fumes. That's why we are progressively going to get rid of them,” said Christophe Najdovsky, the Parisian deputy mayor for transport and public space. If you’re in the market, know that the list of plug-in cars is a long one these days. In mid-2016, according to EV-Volumes, the Nissan Leaf led the market, followed closely by Tesla’s Model S. Then come BYD’s Tang and Qin models, the Chevy Volt, SAIC’s Roewe E550, Mitsubishi’s Outlander, Renault’s Zoe, BYD’s e6, BMW’s i3 and Tesla’s Model X—and more than a dozen others. You may not recognize some of those names. BYD is a Chinese car manufacturer. SAIC is a British-Chinese company.
Another sign that the industry is maturing: None of the cars on the list is a golf-cart-looking thing. They’re all sedans or SUVs. For early adopters, the idea that their hot new EV looks just like your father's sedan could be a problem. But the industry isn't just going for early adopters any more. © Jan TenBruggencate 2016
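As a rough check on the sales figures quoted above, here is a small Python sketch computing the implied growth between the cited data points. The numbers are the approximate monthly global plug-in sales from the post; nothing else is assumed.

```python
# Approximate global plug-in sales per month, as cited from EV-Volumes.
monthly_sales = [
    ("Jan 2014", 15_000),
    ("Jan 2015", 25_000),
    ("Jan 2016", 40_000),
    ("mid 2016", 70_000),
]

# Percentage growth between each pair of consecutive data points.
for (label_a, a), (label_b, b) in zip(monthly_sales, monthly_sales[1:]):
    growth = (b - a) / a * 100
    print(f"{label_a} -> {label_b}: {growth:.0f}% increase")
```

That works out to roughly 67 and 60 percent year-over-year growth in the monthly run rate, and then about 75 percent more in just the first half of 2016: not a steady doubling, but very fast nonetheless.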
Forgive me, techies, but here are the seven reasons why Silicon Valley likes Trump - Deregulation and more favorable tax treatment of repatriated funds are high on the list. I have to admit that I am cringing even as I key this in, but there are some very good reasons why many in tech are actually welcoming the incoming administration of Donald Trump. I spent the last few weeks talking to a range of Silicon Valley leaders, all of whom would only talk off the record, because, well, Trump. All of them, to a person, were against him during the campaign, some even voicing public opposition, and they all lent strong to tepid support to Hillary Clinton. That said, most indicated that they are seeing what they consider some promising signs from the new power structure in D.C. I know, I know, I know. But I don’t make the news — I just type it over here on this keyboard. And I have written a lot about why many in tech think the incoming president is very bad news for tech already, citing numerous reasons, many of which are actually scary, too. Among them: A series of disturbing comments on immigration, all of which point to a xenophobic attitude against the very kind of hard-working and innovative people who built Silicon Valley; deeply entrenched misogyny that is at cross purposes with what tech purports to support (and never really does, but that’s another story!); a tin ear to diversity, too, again still a weakness of tech; a president with a firm grip on Twitter, but a 1960s mentality on what tech is and is going to be; the possibility of a very damaging series of trade wars; and, of course, a very real chance of destabilizing the world stage with reckless policies that could maybe be canny if they did not seem so profoundly of the moment (you know: China bad, Russia good, we begin bombing Vanity Fair in five minutes). Oh yeah, encryption — that’s gonna be ugly. More importantly, for Silicon Valley leaders, these are some of the things that a huge swath of their employee base in tech despises and is pressuring their leadership to resist more vocally and strongly. Employees have already spoken out in many public forums, and I most definitely get dozens of emails a day from across the spectrum of companies from those unhappy that their bosses have become passive to the point of comatose in the wake of Trump’s victory. Well, let me illuminate you, because — while everyone knows I think those leaders who went to see Trump without making some firm public statement of core values and issues were pretty wimpy to do so — they seem to have reasons that go beyond voicing moral objections, calls for non-engagement and outright antagonism. Number 1: Simply put, said one person: “He’s going to be president and also he’s going to be president.” Oh. I had no idea. Thanks for the pro tip! Said another, “We do not stop being a business just because we did not vote for the person running our country.” “We have to function,” said another. Most of these same people say this with a clear cringe in their voice at the prospect of the former reality TV star and real estate mogul at the head of government, but they say it nonetheless. You might call this the Neville Chamberlain feint, after the man who was the British prime minister as the Nazis rose to power, but that would be rude. Number 2: Some of those I spoke to said that the Trump transition staff — led by investor Peter Thiel and others — has been much more engaged than previous administrations in reaching out.
More so, several insist, than the Obama administration and also the Bush folks. This I believe less. I think it is that they are more surprised that there is outreach at all from those whom they very much opposed, and from someone who had been pretty hostile to tech overall in the campaign. After you slime someone as not so cool (see Elon Musk, see Jeff Bezos, oh, see a bunch of them), it’s always a bit of a shock to get an invite to the tower and free branded water instead of a cudgel in the dungeon. Still, they are feeling paid mind to, which they love; they think that there is a dialogue starting to take place, which they cling to like there is no tomorrow (even though there might not be!). “We discussed substantive issues at the meeting,” said one person at the recent tech leaders’ confab with Trump. “And Trump seemed fully engaged with our suggestions.” Yeah, he’s good at giving the people what they want, for sure. “We’ll get right on that!” “We’ll fix that!” “My guy will call your guy!” It is probably a relief from the smarty-pants Obama people who actually raised reasonable objections and wanted to debate the issues. Number 3: In that vein, it is clear that the Trump cabal is aiming to either deregulate or make it easier to play in already regulated industries. This is nothing but good for the we-love-to-disrupt crowd of tech, because they don’t have to contend with all the prickly regulators that get in the way of a good time. So a self-driving car runs a red light, big whoop! Privatizing space would be really smart, because Mars living is cool! Social media giants should not be held responsible for fake news, because, um, well, just because! Guns don’t kill people, people ... wait, that is another industry. “We can grow faster with an administration that is friendlier to business,” said one leader, “and then raise our specific objections when they get out of line on social and other issues.” See Chamberlain above, which now feels a little less rude. Number 4: The same goes for some key agencies in the federal government that pertain more to tech interests, such as the FDA and the FAA. Silicon Valley has gotten a lot of pushback and slowdown from both of these agencies, which has been vexing to some involved in digital healthcare, self-driving cars, drones and more. A faster-moving and possibly defanged group of overseers here is better for tech, which often pushes the borders in good ways (see Color Genomics) and bad (see Theranos). Number 5: It’s the same thing with contending with overseas markets, of course. Despite fears of a damaging trade war across the world, which is usually not good for anyone, there is also some solace in the idea that the Trump people could push back hard against foreign governments where U.S. tech has had some trouble. That’s clearly true in China, where very few have gained any significant foothold. Consider Facebook — while it will not give a timeline, most insiders acknowledge that it will eventually enter the market in order to maintain its massive growth. And when it does, it’s got to have more than its nifty messaging service to deal with the Chinese. Or Google in Europe — while President Obama has defended it there (see my interview here with him) from all sorts of monopoly and privacy allegations, I’d expect Trump to be even more vociferous. “He seems intent on leveling the playing field in overseas markets and that’s good,” says one person.
Number 6: One of the more surprising responses I got from leaders I talked to was that a Trump administration might mean more unusual selections for key jobs across government. “He has some awful picks and then some ones that make you think that maybe an outsider can make a difference in that muck of D.C.,” said one person, who pointed to Trump’s secretary of state choice, Exxon Mobil CEO Rex Tillerson. Many in Silicon Valley pointed to Tillerson, in fact, as the kind of global thinker and doer that policy needs. I suppose that love of the outsider shaking things up has a big appeal to Silicon Valley leaders, who love to think of themselves as disrupters, even if they are more of the permanent ruling class than they care to admit. Also liked are secretary of transportation choice Elaine Chao and Treasury secretary pick Steve Mnuchin (who doesn't love a banker!). No one likes Ben Carson for HUD, but it hardly matters to tech. They all cringed at chief strategist Steve Bannon, which seems the right response, although every single person I spoke to thinks Bannon is deeply intelligent. Lucky Number 7: Finally, there is what I think is the main reason for tech to like Trump: He will be the one to get the $2 trillion in overseas assets repatriated to the U.S. at a reasonable tax rate. While Obama was not able to pull it off, Trump has all the pieces on the political game board and is going to use them for his own gain as well as for American businesses with money stuck elsewhere. What will be most interesting will be the dance the tech companies will do with Trump for this mountain of dough. Will they be forced into making promises to open U.S. manufacturing facilities, to pay for job retraining and to invest heavily here? They will! Will we all have to someday endure the photo op from a tour of those facilities with, say, Trump and Apple CEO Tim Cook? We will! Will any of it make a real difference? Unclear! Also related will be the many gimmes from the massive infrastructure spending that will take place. Will that include things like sensors in roads to push the development of self-driving cars? Will it mean spending on a more robust digital access network? Will there be an opening for tech in how and where the money is spent to invest in businesses that will be created? You know there will, which is why you should not be surprised by just how cooperative the liberal and socially conscious Californians could be. Sure, they hate Trump the man, but Trump the president is another story altogether.
Rent the Runway has raised a $60 million investment led by Fidelity - The company was profitable in 2016 on revenue north of $100 million. In March, Rent the Runway CEO Jennifer Hyman told Recode, “I think you need to assume it’s impossible to raise equity financing for the next two years.” Just nine months later, Rent the Runway has closed a new $60 million equity investment led by the mutual fund company Fidelity, with additional money from existing investors like Bain Capital Ventures and TCV. What changed in that time? The startup best known for its dress rental business put together a profitable year on an Ebitda basis while growing its revenue to well over $100 million. Rent the Runway also launched a new product — a $139-a-month rental subscription for everyday workwear — that accounted for more than one-fifth of total company revenue in its first year. “[G]iven a tougher market over the past year or so, I wanted to put the company into a sustainable financial situation where we were not beholden to external swings in the economy,” Hyman wrote in an email. “We achieved that and saw that the market rewarded the strong financial foundation we had built.” Hyman said the new money will help Rent the Runway grow its 1,000-person business quicker than it otherwise could. But the CEO said she doesn’t expect marketing spending to grow to much more than the 4 percent of revenue it currently represents. The deal underscores a renewed focus on profitability for investors and fast-growing startups alike. Just this week, news broke that the subscription meal-kit company Blue Apron — another Fidelity portfolio company — was pausing its IPO plans to focus on widening its profit margins. The investment is another bet by Fidelity, which has also backed both Uber and Airbnb, on the so-called sharing economy. In this instance, the belief is that more women will view clothing rentals beyond dresses as the norm in the future. The valuation of this Series E investment was a “significant step up” from the $520 million valuation Rent the Runway earned when it raised a $60 million round in 2014. Hyman declined to provide more specifics on the new valuation, other than that it was based on the types of metrics and multiples on which public companies are valued. “I didn’t really see a purpose to having an outsized valuation right now that was way ahead of our growth and would potentially make it harder for us to have optionality over the next few years,” Hyman said in a phone interview. Rent the Runway was founded by Hyman and her co-founder Jennifer Fleiss in 2009 to give women a rental alternative for designer dresses they might normally purchase for one-off events like weddings and galas. Since then, the business has built a strong following among 20- and 30-something women, and has made moves to broaden its reach in the last year. In March, Rent the Runway launched Unlimited, a $139-a-month subscription that was the culmination of 20 months of tinkering with other monthly subscriptions that didn’t work. Unlimited lets women rent three articles of clothing or accessories at any given time — think a jacket, a blouse and a purse — with free deliveries and returns and unlimited swap-outs. This month, the startup unveiled a $65 monthly subscription called StylePass that is good for the rental of one article of clothing per month. The startup also operates about a half dozen brick-and-mortar showrooms, including one inside a Neiman Marcus department store, with more planned.
“We think as long as women are viewing renting clothes as a normalized behavior, they will flip between different ways of renting throughout their life,” Hyman said.
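For a sense of scale, here is a rough, hypothetical back-of-envelope in Python on the subscription numbers in this story. It treats revenue as exactly $100 million and Unlimited's share as exactly one-fifth, although the story only gives both as lower bounds ("north of" and "more than"), and it ignores the mid-year launch ramp.

```python
# Lower-bound figures from the story, treated as exact for illustration.
total_revenue = 100_000_000   # dollars ("north of $100 million")
unlimited_share = 0.20        # "more than one-fifth" of revenue
monthly_price = 139           # Unlimited subscription, dollars per month

unlimited_revenue = total_revenue * unlimited_share
subscriber_months = unlimited_revenue / monthly_price
print(f"Implied subscriber-months: {subscriber_months:,.0f}")          # ~143,885
print(f"Average monthly subscribers: {subscriber_months / 12:,.0f}")   # ~11,990
```

On those assumptions, Unlimited would imply an average of roughly 12,000 paying subscribers over the year, and somewhat more given that both inputs are lower bounds.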
A journalist is suing U.S. spy agencies for more details on Russia’s hacking of the U.S. election - The CIA, FBI, Department of Homeland Security and the Office of the Director of National Intelligence have failed to respond to a Freedom of Information Act request. A lawsuit was filed yesterday against the CIA, FBI, Department of Homeland Security and the Office of the Director of National Intelligence for failure to comply with a Freedom of Information Act request seeking records pertaining to Russian interference with the recent U.S. presidential election. Jason Leopold, an investigative reporter who frequently writes for Vice, and Ryan Shapiro, a PhD candidate at MIT and research affiliate at Harvard who is known for his activism around the release of government records, filed the lawsuit after never receiving word as to whether or not their petition for expedited processing of their information request would be granted. Specifically, the FOIA requests seek information Congress may have sent to or received from federal intelligence agencies that references terms like CrowdStrike, Fancy Bear, Guccifer 2.0, related IP addresses and other terms that surfaced in relation to the hacking of campaign-related systems in the run-up to the election. Leopold and Shapiro are also requesting communications between FBI director James Comey and the White House about publicly accusing Russia of interfering with the election. According to the Department of Justice, agencies are required to notify the party who issued the information request within 10 days of receiving a letter asking for expedited processing. Leopold and Shapiro first sent their FOIA request to the FBI, DHS, CIA and the ODNI on Dec. 14, and a second request to the CIA on Dec. 15. The CIA is now extremely confident that Russia interfered in the months leading up to the U.S. presidential election in order to bolster Trump’s chance of securing the presidency, according to a report earlier this month in the Washington Post. In October, seventeen U.S. federal intelligence agencies publicly concluded that the breach of the Democratic National Committee and the email account of John Podesta, Hillary Clinton’s campaign chairman, was the work of hackers working for Russia. The plaintiffs seek information about any ongoing investigation into ties between Donald Trump or anyone associated with his campaign and interference from the Russian government in the election. Leopold and Shapiro also asked for communications between Congress and the Republican and Democratic campaign committees, as well as Hillary Clinton’s campaign, in reference to Russian involvement with the 2016 presidential race. With Trump’s inauguration only weeks away, there’s a rush to understand how the results of the narrowly won presidential election were affected by interference from Russia. The Russian hacking of the 2016 election marks the first time a foreign power has tried to sway an American election and undermine the democratic voting process with this level of technical sophistication. President Obama, who earlier this month vowed to retaliate against Russia for its tampering with the U.S. election, is now working with his administration to apply an executive order issued last year that permits the U.S. to impose sanctions on overseas individuals who attack computer systems related to critical infrastructure, like transportation systems or a power grid, or who seek to gain a competitive advantage through commercial espionage by hacking online systems, according to the Washington Post.
But U.S. electoral systems are not currently designated as critical infrastructure protected by the Department of Homeland Security. The fact that electoral systems are not protected as critical infrastructure led to a race before the election to secure electronic voting machines and voter registration databases nationwide. Before Election Day, state voter registration databases in Illinois and Arizona were found to have been hacked. Leopold and Shapiro are paying for their lawsuit with a crowdfunding campaign called Operation 45 that has collected over $30,000 in the past thirty days.
Recode Daily: Our guide to 2017 - It’s happening. In this year-end edition of Recode Daily, we take a look ahead at 2017 and a look back at some of the best tech reporting of 2016. Here are a few things Recode’s reporters and editors expect to see in the coming year: In tech: IPOs from Snapchat and Uber, a million drones in the sky, stiff competition for Amazon’s Echo and the 10th anniversary of Apple's iPhone. In media: Ongoing issues with fake news and the filter bubbles that come with personalization, more media mergers and more choices for digital TV services. And here’s some good reading from the last year: The Wall Street Journal’s ongoing coverage of blood-testing startup Theranos, from the inaccuracies in its equipment, to its retreat under regulatory sanctions, to the profile of the former employee who blew the whistle. The New York Times on Pokémon Go and augmented reality, and its blow-by-blow breakdown of the Russian hack of the DNC. ProPublica on the bias built into an algorithm used to predict future criminal behavior, Motherboard on how to build a bot that isn’t racist and Nature on the difficulty of assessing the social impacts of artificial intelligence. The Financial Times’ coverage of Apple, including its secret teams working on virtual reality and cars. Vulture on how Peak TV is reshaping the television industry. Also, on the latest episode of Too Embarrassed to Ask, Lauren Goode and Peter Kafka break down all the different streaming services, and why your favorite TV show isn’t on Netflix. Top Stories From Recode: Chinese electronics firm LeEco won’t be able to close its Vizio purchase this year. Regulatory hurdles are slowing down the $2 billion deal. Jawbone says Fitbit is no longer seeking to block sales of its products. Of course, Jawbone isn’t selling that many fitness trackers these days either. Apple has pulled all of Nokia’s Withings products from its online store. The move comes after Nokia sued Apple for patent infringement in courts across the globe. This Is Cool: 127 things that happened in 2016 in one drawing. From Harambe to “Hamilton.”
Social media, which now divides us, doesn’t have to, Hello founder Orkut Büyükkökten says - Büyükkökten, who started Google’s first social network, Orkut, says his new social app will help you make friends. Conventional wisdom holds that technology, and our addictions to it, are making us more isolated. But Orkut Büyükkökten says that’s wrong: Tech can actually make friendships easier, and we just haven’t found the right app yet. “My biggest passions in life are people and connecting people through technology,” Büyükkökten said on the latest episode of Recode Decode, hosted by Kara Swisher. “If I look at society today, I believe that 99 percent of us need to connect more.” Büyükkökten founded his first social network, Club Nexus, while he was a grad student at Stanford during the first dotcom boom. Its pre-Friendster success led him to Google, where he founded that company’s first social network, which was codenamed Eden but ultimately named after him, Orkut. However, Orkut “wasn’t ready to scale at launch” and only took off in a handful of countries like Brazil and Estonia. Having since left Google, Büyükkökten is back with a new social startup called Hello Network, which promises to make it easier to find new friends. “If you look at humanity, it’s a complex network with 7.4 billion individuals,” he said. “We have such a huge need to connect and connecting is getting harder and harder, even though there’s a lot of technology that should enable us to connect easier.” The problem with today’s social media sites, Büyükkökten argued, is that they reduce people to “highlight reels” rather than encouraging them to be authentic with one another. “I could have two friends who are about to get divorced, but they would post a picture on Facebook where they are having a picnic happily,” he said. “I know it’s not real, and everyone who’s looking at it thinks that it is.” “We create trust within each other by sharing,” he added. “We need to be able to share our true selves, and that’s how we dissolve the walls between us that separate us.” You can listen to Recode Decode in the audio player above, or subscribe on iTunes, Google Play Music, TuneIn and Stitcher. If you like this show, you should also sample our other podcasts: Recode Media with Peter Kafka features no-nonsense conversations with the smartest and most interesting people in the media world, with new episodes every Thursday. Use these links to subscribe on iTunes, Google Play Music, TuneIn and Stitcher. Too Embarrassed to Ask, hosted by Kara Swisher and The Verge's Lauren Goode, answers the tech questions sent in by our readers and listeners. You can hear new episodes every Friday on iTunes, Google Play Music, TuneIn and Stitcher. And Recode Replay has all the audio from our live events, including the Code Conference, Code Media and the Code Commerce Series. Subscribe today on iTunes, Google Play Music, TuneIn and Stitcher. If you like what we’re doing, please write a review on iTunes — and if you don’t, just tweet-strafe Kara.
As the Cavs battled the Warriors, we checked out the NBA’s new fantasy app - The iOS and Android app was co-developed with FanDuel. Sunday’s Christmas Day matchup between the Golden State Warriors and the Cleveland Cavaliers served as a rematch of last year’s NBA finals and a chance for Bay Area sports fans to get some revenge. For me, it was also a good chance to try InPlay, a new fantasy app from the league. The game, co-developed with FanDuel, aims to get people watching TV broadcasts longer. More minutes watched equals more ad dollars for the league and its broadcast partners, who pay big bucks for TV rights. In deciding to give InPlay a try, I was hoping to answer a couple of big questions. One, would this add to my enjoyment as a fan, or would it just be a distraction? And two, is this really likely to be a big deal for either the NBA or FanDuel? First off, the app (which works on iOS and Android) is not a way to make money, like FanDuel’s paid gaming apps are. This is a free app, with a couple of prizes for the top players in the country. Think of it as entering a contest as opposed to going to Vegas and gambling. The way InPlay works is, you follow along with any of the national TV broadcasts and pick one of the two teams (I chose the Warriors). Then you choose one player to be your guy for each quarter, but you can’t pick the same player for more than one quarter. How that player performs determines your success in the app. You gain points for things like rebounds, assists and made baskets, while losing points for a turnover. Well ahead of tip-off I had chosen my lineup: Kevin Durant for the first quarter, Klay Thompson for the second, Draymond Green for the third and Steph Curry for the fourth. Now the only decision I had to make while watching the game was when to use my “turbo” powers — an option that lets you score extra points for just under a minute on anything good done by your team and player. [Screenshot by Recode: Not doing too badly at the end of the first quarter, thanks to my pick of Kevin Durant.] I used two in the first quarter on Kevin Durant, one as he grabbed a rebound and the other as he seemed likely to hit a three. (He missed but was fouled on the next play, sending him to the line.) So here’s my verdict after just one quarter of play: The NBA got the mix about right in terms of giving attention-challenged fans (like me) a way to do something additional during the game without distracting them too much from the core task of rooting for their team and yelling at the refs. I remember the NHL trying a game like this a couple of years ago where you had to pick who would win each face-off. It was both too time-consuming and way too random to be any fun. InPlay is fun so far, but it has a long way to go in terms of getting any significant number of people involved. I could see in the app while I played that there were just a few hundred people playing along with me. As of the first quarter I was doing pretty well, frequently in the top 20 and ending the quarter at No. 26.
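To make the scoring mechanics described above concrete, here is a minimal sketch in Python of how a quarter might be tallied. The point values, the turbo bonus, and the event names are hypothetical placeholders; the piece doesn't spell out InPlay's actual scoring table, only that good plays earn points, turnovers cost points, and turbo boosts everything good for just under a minute.

```python
# Hypothetical InPlay-style scoring sketch. POINTS, TURBO_MULTIPLIER and
# TURBO_DURATION are assumptions for illustration, not the real rules.
POINTS = {"basket": 2, "rebound": 1, "assist": 1, "turnover": -1}
TURBO_MULTIPLIER = 2     # assumed bonus applied while turbo is active
TURBO_DURATION = 60.0    # "just under a minute", per the article

def score_quarter(events, turbo_starts):
    """Tally one quarter for the single player chosen for that quarter.

    events: list of (timestamp_in_seconds, event_name) for that player.
    turbo_starts: timestamps at which the fan fired a turbo.
    """
    total = 0
    for t, name in events:
        points = POINTS.get(name, 0)
        # Only positive plays get boosted, and only inside a turbo window.
        if points > 0 and any(s <= t < s + TURBO_DURATION for s in turbo_starts):
            points *= TURBO_MULTIPLIER
        total += points
    return total

# One player per quarter, no repeats: e.g. Durant, Thompson, Green, Curry.
q1_events = [(35.0, "rebound"), (150.0, "basket"), (190.0, "turnover")]
print(score_quarter(q1_events, turbo_starts=[30.0]))
# -> 3: only the rebound fell inside the 60-second turbo window.
```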
Chinese electronics firm LeEco won’t be able to close its Vizio purchase this year - Regulatory hurdles are slowing down the $2 billion deal. Chinese electronics maker LeEco will have to wait until the new year to complete its $2 billion deal to buy TV maker Vizio, Recode has learned. The deal has won approval from U.S. regulators, but Chinese authorities have yet to give their blessing. “We are awaiting regulatory approval in China,” a LeEco representative said in response to an inquiry from Recode. “We are hoping for early Q1.” LeEco has been looking to the Vizio deal to give it added credibility and retail presence in the U.S. as it aims to rapidly grow its operations. The company has been looking to make a big splash here, acquiring Yahoo’s former Santa Clara campus for $250 million and hiring several top executives. The company’s outspoken chairman, Jia Yueting, recently acknowledged, though, that the company had been growing faster than it could handle and promised to operate in a more sustainable way. “No company has had such an experience, a simultaneous time in ice and fire,” Jia said in a November memo, according to Bloomberg. “We blindly sped ahead, and our cash demand ballooned. We got over-extended in our global strategy. At the same time, our capital and resources were in fact limited.” However, the company’s U.S. executives insisted at the time that LeEco was continuing ahead with both the Vizio purchase as well as its efforts to bring its existing TV and smartphone lineup to the U.S. LeEco has since expanded beyond its own website to also sell through Best Buy, Amazon and Target.com.
Can Gut Bacteria Control Your Mood? - You are never truly alone. Everywhere you go, you carry trillions of microbes inside your gut. These countless creatures, invisible to the naked eye, form what scientists call the microbiome — a thriving ecosystem intimately connected with your body. These microbes, though, are no mere travelers hitching a ride just for fun. They are in constant communication with your body and brain. As scientists delve more deeply into this area of research, they are starting to find that the microbiome may influence you not just physically, but emotionally as well. Research into the microbiome is a relatively new field, with some ideas put forth being little more than speculation. In a new book [https://www.amazon.com/Mind-Gut-Connection-Conversation-Impacts-Choices/dp/0062376551/], “The Mind-Gut Connection,” Dr. Emeran Mayer, professor of medicine and psychiatry at UCLA, takes what is known about the microbiome and speculates about what it means for our mental and physical health. Most of the microbes in the gut live on the mucus layer that covers the gut’s surface, very close to the nerves and receptors of the intestines. This allows them to communicate with the body and brain through signaling molecules, which are similar to the neurotransmitters and hormones used within the body. This communication, however, is a two-way street. For example, norepinephrine released by the body during stress can alter the genes expressed by the gut microbes and cause them to release different molecules. In this way, the microbes can tell what mood you are in. The signal from your body’s stress response reaches every part of the body, but the gut plays a special role because it has so many immune, nerve, endocrine and hormonal cells. Any level of stress can have a large influence on the gut, which may be why we feel anxiety in the “pit of our stomach.” When stress runs amok, it can also impair normal gut functioning, leading to bloating, constipation and diarrhea — all symptoms of irritable bowel syndrome. Research even suggests a connection between the gut and mental health. Some studies have found that certain people develop both gut and emotional symptoms, although it’s difficult to know which one is driving the other. In an interview with WBUR [http://www.wbur.org/commonhealth/2016/09/16/the-mind-gut-connection], Mayer said that in about half of the patients in those studies, the symptoms started with anxiety and depression, while in the other half, abdominal pain came first. So can microbes in our gut affect how we feel? Many of the microbes in the body live in the large intestine, where they break down the food that has made it that far into smaller molecules. These include molecules the microbes use to signal the body — such as short-chain fatty acids. These molecules bind to receptors on the intestinal cells, which then release hormones that tell your brain that you are full and satisfied. However, Mayer said that scientists don’t know yet whether microbes can alter our behavior. He suggests that communication between the gut microbes and the dopamine reward system in the brain could influence what we eat, or even what foods we seek out and buy. His research has also shown that the microbiome is linked to parts of the brain’s reward system. More research is needed, though. 
Although Mayer uses probiotics — both pills containing certain types of healthy intestinal bacteria and various fermented foods — when treating patients with gut conditions, he doubts that a single probiotic pill will be able to change your mood by affecting the microbiome. These kinds of effects have been seen in studies with rats and mice, but the human brain and emotions are much more complex. Each person’s microbiome is unique, like a bacterial fingerprint. The composition of the microbiome can shift over time, but is heavily influenced during the first three years of life. However, even if the microbiome can influence the body and emotions, that doesn’t mean we have no free will. People have other ways to override the influence of the microbes, if needed. Strategies like mindfulness-based stress reduction and lifestyle changes are already used to treat certain gut disorders. Mayer’s group is currently studying whether these techniques can also improve the health of both the brain and the gut microbes. These may even undo some of the changes made to the microbiome by a person’s early life experiences.
The Neurobiology of Love - For anyone who has watched a friend fall in love, Nietzsche’s words from Thus Spoke Zarathustra might ring true: “There is always some madness in love. But there is always, also, some method in madness.” This “madness” is often clear, even from the outside. How many times have you questioned your friend’s judgement in choosing a partner or felt annoyed at their giddy euphoria? Less apparent is the “method” — or reason — in this madness, which is why poets, musicians and novelists have been searching for the perfect way to describe love for centuries. It turns out, though, that neurobiologists have quite a lot to say about this great mystery of life. In recent years scientists have been examining the biological underpinnings of romantic love, searching for a way to understand how love can drive people to such wild behaviors. What we experience as romantic love is partially driven by neuromodulators [http://onlinelibrary.wiley.com/doi/10.1016/j.febslet.2007.03.095/full] such as dopamine. This chemical is involved in both sexual arousal and romantic feelings. It also has an important role in the brain’s pleasure and reward pathways. Neurons in the brain release dopamine when we take part in activities related to our survival or to producing offspring, such as eating sugary foods, having sex and falling in love. This flood of dopamine gives us a sense of satisfaction that encourages us to continue. The importance of dopamine in mating has been studied in prairie voles [http://www.sciencedirect.com/science/article/pii/S030645221101284X]. Both male and female prairie voles develop an attachment to their partner after only a single mating, a development that depends upon dopamine. In one study [https://www.ncbi.nlm.nih.gov/pubmed/10718272], when researchers activated the dopamine receptors in the male vole’s nucleus accumbens — part of the reward pathway — the male became attached to a female even without mating. Blocking the same dopamine receptors prevented males from developing a partner preference, even when oxytocin — the “cuddle” hormone — was present. Research in people has found a similar role for dopamine during romantic love. In one study [http://jn.physiology.org/content/94/1/327], when people looked at a picture of someone they were “in love with,” the dopamine-rich areas of the reward pathway were activated — the same regions that turned on in response to the promise of a monetary reward. Although dopamine is involved in creating the euphoric or elevated feelings that come while in love, it’s less of a “pleasure chemical” than it has been labeled in the popular press. Studies [http://jneurosci.org/content/30/18/6180.full] on roulette players found that gamblers show just as much dopamine activity in the nucleus accumbens when they have a near-miss as when they win. This may be why unrequited love can have such a strong hold over us. Other neuromodulators help produce the experience of romantic love, including oxytocin and vasopressin. These chemicals are involved with bonding and attachment, both maternal and romantic. They are released during childbirth, breastfeeding and orgasm. They also interact with the dopamine reward system [http://www.sciencedirect.com/science/article/pii/S030645221101284X] — by stimulating the release of dopamine, they make love a rewarding experience. Some studies suggest that oxytocin and vasopressin receptors play a role in the monogamous behavior of prairie voles. 
Compared to the promiscuous montane vole, the prairie vole has a higher density of oxytocin receptors in the brain — including in areas involved in reward and emotion-related memory formation. When prairie voles mate, oxytocin and vasopressin are released into the brain. If this release is blocked [https://www.ncbi.nlm.nih.gov/pubmed/11508730], the prairie voles no longer develop partner preferences and become promiscuous like the montane vole. Romantic love can even have a dampening effect on certain parts of our brain, again suggesting a form of “madness.” When people looked at a picture of their beloved [http://jn.physiology.org/content/94/1/327], activity decreased in the amygdala, an area of the brain connected to fear and anger. One group of researchers [https://www.ncbi.nlm.nih.gov/pubmed/15006682?access_num=15006682&link_type=MED&dopt=Abstract] suggested that love turns down the fear response. This makes sense given that being vulnerable and building trust are essential parts of falling in love. Romantic love can also lead to decreased activity in the frontal cortex [http://www.sciencedirect.com/science/article/pii/S0014579307004875], resulting in a relaxation of the criteria that we use to judge other people. So when we are in love, we are less likely to see our beloved’s shortcomings. This decrease also occurs with maternal love — where, according to mothers, their children can do no wrong. But while love can change how you judge the object of your affection, you are unlikely to think differently about a book or a scientific work. Other decreases occur in areas of the brain associated with “theory of mind” and “mentalizing,” including the prefrontal cortex. These regions help us figure out what other people are feeling or planning. Theory of mind also comes into play when you need to distinguish between yourself and others. When you fall in love, though, that distinction diminishes. If your love is strong enough, you may even lose sight of your separateness from your beloved. So while love may feel at times like a form of madness, there is a sense of unity — or oneness — that poets have long sought to capture. _____ “If I go into the place in myself that is love, and you go into the place in yourself that is love, we are together in love. Then you and I are truly in love, the state of being love. That’s the entrance to Oneness. That’s the space I entered when I met my guru.” ~ Ram Dass [https://www.ramdass.org/the-entrance-to-oneness/]
If You Want to Evolve, Be Mindful - By Deepak Chopra, MD and Rudolph E. Tanzi, PhD Human beings are unique in the scenario of life on Earth–that much is obvious. We are guided by awareness, and to implement our wishes, dreams, and inventions, the higher brain (chiefly the cerebral cortex) has evolved to extraordinary proportions. Although classical Darwinism is mindless, and staunchly defended as such by strict materialists, Homo sapiens is no longer caught in the clutches of natural selection. As we know, human society is very different from the state of nature. Chimpanzees don’t get their food at the grocery store, and we don’t get ours by fighting with rivals in the treetops. So the real dilemma isn’t whether human evolution is guided by mind, because clearly it is. What remains puzzling is how much connection there is between our mind and our genes. There is no doubt that the roughly 23,000 genes you inherited from your parents remain the same throughout your lifetime. If the genetic blueprint were as fixed as an architect’s plans, there would be no mind-gene connection. You would be the puppet of DNA, mechanically carrying out whatever actions are programmed into the 3 billion base pairs that constitute the human genome. To defenders of strict Darwinism, the difference between instinct, which controls animal behavior, and mind, which gives freedom of choice, is lost. But no one who isn’t harping on an agenda could claim that a Mozart symphony or the ceiling of the Sistine Chapel was created by instinct. The range of the human mind is vast and creative. But as we create the complex human world, are our genes listening? If so, are they cooperating in our creative enterprises? The answer is yes. Over the past two decades, the new genetics has made major discoveries that validate the mind-gene connection, opening up the promise that what lies ahead for the human race is mindful (that is, self-directed) evolution. To touch on the major discoveries that led to this turning point in evolution, the following points should be considered parts of the same whole. Genes are active and dynamic, producing a wide array of proteins to perform different functions. These products can be modified by chemical marks from the area of the genome that acts as a switching station for gene activity, known as the epigenome. These switches can be simple on/off switches or act more like rheostats. Further gene modification can take place through the way a strand of DNA is folded in on itself (with the help of supporting proteins called histones), bringing separate sequences into close proximity with each other and creating “gene neighborhoods.” Each individual’s genetic makeup is vastly expanded by the bacteria that inhabit the human body, chiefly the intestinal tract; this population, known as the microbiome, is sometimes referred to as our “second genome.” The DNA of microorganisms played a major part in how human DNA evolved. These microbes are not foreign invaders. They are a dynamic part of us. Besides getting incorporated directly into human DNA, microbial DNA contributes its own products to our bodies. For example, gut microbes are a major source of serotonin and dopamine, two major brain chemicals connected to mood, depression, and even success. They can even affect inflammation in the brain. Thanks to the dynamic epigenetic switching mechanisms and the microbial DNA that has been studied so far–a mere fraction of what exists–it is thought that gene activity responds to virtually every experience we have over a lifetime. 
These aren’t small tweaks to Darwinism but a window into a new world. In the Darwinian model, identical twins are genetic mirrors of each other, since they are born with identical genomes. But each twin lives their own life, and other factors, beginning with the response of the epigenome and microbiome, write a unique story for each. Thus one twin can be afflicted with Alzheimer’s, or be schizophrenic or obese, while the other is not. Medical science uses twin studies to pinpoint exactly how much contribution is made to various disorders by identical genes, and the typical answer is around 50%. In a word, the old battle between nurture and nature seems to be a tie. If outside influences create 50% of an illness, this implies, at the very least, that a large contribution is being made to everyday behavior. If so, then it is possible to say, quite logically, that a person can self-direct his or her own evolution. This is true because unlike other species, human beings have a huge amount of control over our environment. How we nurture ourselves is up to us. A group of high-level scientists is participating in a project to study Self-Directed Biological Transformation (SBT), focusing on mind-body practices like meditation where people subjectively report that their lives are being changed. Do their reports of inner peace, mental clarity, increased happiness, and greater insight have a biological basis? If so, then genetic activity must be involved, because DNA lies at the basis of all bodily functions, down to the firing of individual neurons in the brain. It has been established for four decades that meditation brings benefits outside the mind, such as reducing high blood pressure and stress hormones. But why should genes be restricted to functions we happen to classify as physical when they are involved in the whole mind-body system? It was long past due to explore the mind side of the equation. To date, the SBT research has been very promising. Thousands of genes are affected in their activity by meditation, and the changes often begin within the first few days of routine meditation. These findings are being submitted for publication very soon. Across the board the new genetics is arriving at similar results. What this means, if we look just a bit further, is the following: Positive lifestyle choices, because they speak directly to our genes, may drastically reduce risk for chronic disorders like heart disease, Alzheimer’s, and type 2 diabetes. Contemplative practices like meditation can play a key role in transforming genetic activity. As genetic activity is transformed, we can expect the subjective experience of life to be enormously enhanced. More and more the spiritual claims of the world’s wisdom traditions will be found to have a scientific basis through epigenetics. As we take more control over our genetic story, we will also affect our future generations. The notion that behavioral characteristics in parents might possibly be epigenetically passed on to their offspring is one of the most exciting frontiers in the new genetics. This has already been demonstrated in laboratory animals. A mouse that was conditioned to fear a certain smell as an adult was shown to pass this same fear on to her offspring through epigenetic changes in the genome (without the need for new genetic mutations). This is referred to as “soft” inheritance to the next generation. We can now begin to ask whether the same is true for human beings. 
Demographic studies of the Dutch famine during World War II and of 9/11 are already hinting in this direction. We discuss these studies in our upcoming book Super Genes (November 2015). If this idea of trans-generational epigenetic inheritance is someday shown to also apply to human beings, no one knows how much benefit we may gain. But the benefits require self-awareness and mindfulness. We must make conscious choices that move our evolution toward the best in human nature while correcting the worst. The new genetics gives us this responsibility. As more results are validated, there will be no avoiding the choices that face us. The good news is that we will be rewarded for every positive choice by our genes, here and now and for the time to come. DEEPAK CHOPRA, MD, FACP, founder of The Chopra Foundation and co-founder of The Chopra Center for Wellbeing, is a world-renowned pioneer in integrative medicine and personal transformation, and is Board Certified in Internal Medicine, Endocrinology and Metabolism. He is a Fellow of the American College of Physicians, a Clinical Professor at UCSD Medical School, a researcher in Neurology and Psychiatry at Massachusetts General Hospital (MGH), and a member of the American Association of Clinical Endocrinologists. The World Post and The Huffington Post global internet survey ranked Chopra the #17 most influential thinker in the world and #1 in Medicine. Chopra is the author of more than 85 books translated into over 43 languages, including numerous New York Times bestsellers. His latest books are Super Genes, co-authored with Rudolph Tanzi, PhD, and Quantum Healing (Revised and Updated): Exploring the Frontiers of Mind/Body Medicine. www.deepakchopra.com DR. RUDOLPH E. TANZI is Professor of Neurology and holder of the Joseph P. and Rose F. Kennedy Endowed Chair in Neurology at Harvard University. He also serves as the Director of the Genetics and Aging Research Unit and as Vice-Chair of Neurology at Massachusetts General Hospital. Dr. Tanzi was named to TIME magazine’s “100 Most Influential People in the World” and The Harvard 100 – Most Influential Harvard Alumni.
Awakening the Luminous Mind, Tenzin Wangyal Rinpoche - At the core of the Dzogchen teachings is the view that all sentient beings are primordially pure, perfected, and have the potential to spontaneously manifest in a beneficial way. This capacity is within each and every one of us. It is our nature, and yet we often find ourselves alienated and disconnected from ourselves and others as we rush about in our day-to-day lives. If one is willing to directly and nakedly encounter the experiences of one’s ordinary life, these experiences become the doorway to the realization of one’s nature, the doorway to the inner refuge. Pain can become the path home. Join Tibetan Bön meditation master Tenzin Wangyal Rinpoche as he invites you to engage in the practice of meditation and reflection, to look intimately within and discover the jewel that is hidden in your ordinary experiences. Explore how to honor and respect the three doors of body, speech, and mind, and recognize the opportunities for healing that life presents. Discover the inner refuge and the gifts of spaciousness, awareness, and warmth that bring healing and benefit not only to you but to your relationships with others and the greater world. Geshe Tenzin Wangyal Rinpoche, founder and spiritual director of Ligmincha International, is one of only a few masters of the Bön Dzogchen tradition presently living in the West. An accomplished scholar in the Bön Buddhist textual traditions of philosophy, exegesis, and debate, Tenzin Rinpoche completed a rigorous 11-year course of traditional studies at the Bönpo Monastic Center (Menri Monastery) in India, where he received his Geshe degree. In 1992 Tenzin Rinpoche founded Ligmincha International in order to preserve and introduce to the West the religious teachings and arts of the ancient Tibetan Bön Buddhist tradition. Rinpoche is known for his clear, lively, and insightful teaching style and his ability to make Tibetan practices easily accessible to the Western student. In addition to Ligmincha International’s affiliates in the United States, Rinpoche has established centers in Central and South America, Europe and India, and has authored nine books.
Living Spontaneously With Wu-Wei - “Reintegration with Nature, which we are, is the recovery of spontaneity.” If you have ever watched an artist at work or an athlete on the field, you may have noticed how effortless their actions seem. As they paint or sculpt or kick a ball into the goal, they move in perfect harmony, in an almost unconscious, instinctual way. This is wu-wei, a state of perfect ease, or effortless action, described in ancient Chinese philosophy. Literally, it means “no trying” or “no doing,” but it is far from the kind of “dull inaction” you might find in someone just going through the motions. Wu-wei is both effortless and effective. It can also be applied to any activity. Under the influence of wu-wei, even the most mundane of tasks can be transformed into an artistic performance as body, emotions and mind are completely integrated. This state of being is described in a story about Butcher Ding, from a book called the Zhuangzi, an important work of Daoist philosophy. Ding is called upon to sacrifice an ox during a traditional ceremony for the consecration of a newly cast bronze bell. Ding dismembers the massive animal with effortless grace, his blade cutting in exactly the right spots and at the best angles. Immersed in wu-wei, his mind is dynamic, spontaneous and unselfconscious. He has moved beyond the need for thought or effort, and transforms butchering the ox into a performance that would rival one by Pablo Picasso or David Beckham. Looking on, the village master Lord Wenhui is very impressed by Ding’s skill with a blade. After the ceremony, Wenhui asks Ding about his incredible abilities. In the book Trying Not to Try: The Art and Science of Spontaneity, Edward Slingerland, a professor of Asian studies at the University of British Columbia, gives Ding’s response [http://nautil.us/issue/10/mergers–acquisitions/trying-not-to-try]. “When I first began cutting up oxen,” said Ding, “all I could see was the ox itself. After three years, I no longer saw the ox as a whole. And now—now I meet it with my spirit and don’t look with my eyes. My senses and conscious awareness have shut down and my spiritual desires take me away.” Ding’s response to Wenhui shows that wu-wei is not an inborn trait, but something that can be developed over time with practice and focus. In the beginning, when Ding looked at the ox, he saw it with his conscious mind—I know this is an ox of such-and-such size, needing to be cut in this way. Later on, this overt conscious awareness shuts down, leaving Ding’s “spiritual desires” to guide his hand as it holds the blade. He no longer needs to think consciously before he cuts. And as Ding relaxes into the moment, he is able to open more fully to the task at hand. Reading this story, you might get the sense that there are two separate Dings involved in butchering the ox—one that sits back and analyzes the contours of the ox or the angle of the blade, and another that, in the words of the Nike commercial, just does it. This type of split personality—where a conscious “I” confronts a part of the self that is more autonomous—may be familiar to many of us. How often have we found ourselves saying something like “I couldn’t stop myself from eating three pieces of pie” or “Whenever I talk to her, I have to bite my tongue”? As odd as it seems, this sense of self-against-other-self is not a misfiring of synapses, but results from how different areas of our brain interact. One part of our brain is fast and automatic, shaped by an evolutionary need for survival. 
It drives us to seek out food, find shelter and reproduce. We have no direct access to this part of our mind, which means that sometimes our conscious mind may struggle to control it. Another part of the brain is slow and conscious, concerned with long-range issues. We tend to identify with this part of our mind, because it is the seat of our conscious awareness, our sense of self. It also includes our verbal center. While the slow part tends to have rigid decision-making, the fast part is more flexible and can change its priorities based on new information. The goal of wu-wei, then, is to convince these two selves to work together smoothly and effectively. As Slingerland puts it, “for a person in wu-wei, the mind is embodied and the body is mindful.” With Ding, this integration appears as an intelligent spontaneity perfectly suited for the situation. But it’s not always about letting go. Sometimes Ding’s conscious mind has to step back in when he confronts a challenging part of the ox. To understand how this coordination might occur at a neuroscientific level, it’s useful to look at the way the brain handles conflicts between thinking consciously and acting instinctively. Psychologists study this kind of interference using the “Stroop Effect” experiment [http://faculty.washington.edu/chudler/words.html]. During this task, a person is asked to name the color of several words on a page or screen. The words are the names of colors, each different from the actual color of the letters. This creates a conflict between the visual processing center and the word recognition system. To resolve this, the conscious mind has to override the automatic impulse of the word recognition system. This is an example of cognitive control, which encompasses many processes in the brain such as attention, memory, language comprehension and emotional processing. Two areas of the brain that are involved in cognitive control are the anterior cingulate cortex (ACC) and the lateral prefrontal cortex (lateral PFC). Research on how these areas work is still ongoing. In the Stroop task, the ACC may act like a detector—when it senses a conflict between the color of the word and its meaning, it alerts the lateral PFC. The lateral PFC decides what action is needed based on its understanding of the task. It then strengthens the visual system to help identify the color and tones down the word recognition system. This process is not instantaneous, which is why most people hesitate when doing this task. There is also a sense of effort involved, as if the brain has to switch gears. So how does this apply to wu-wei? When you are first learning a new task, you have to keep a great amount of attention on the task to make sure you are doing it correctly. In this case, both the ACC and the lateral PFC are active as you consciously—and with effort—work through the task. Later on, as you master the skill, the brain’s control becomes automatic, which frees your conscious mind for other tasks. This was seen in a study [http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0001679], in which researchers did functional MRI scans on jazz pianists, first while they played a simple scale over and over again, and then while they improvised a melody. During improvisation, the ACC became more active, while the lateral PFC turned off. For musicians, improvisation is a kind of wu-wei, where the conscious mind lets go and the body takes over. 
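To make the Stroop interference described above concrete, here is a minimal single-trial sketch in Python. It is an illustration only: a real experiment would randomize and time many trials far more carefully, and the ANSI escape codes are simply one easy way to color text in a terminal.

```python
# Minimal sketch of one Stroop trial, run in a terminal.
# Incongruent trials (word meaning != ink color) should produce slower,
# more error-prone answers -- the conflict the ACC is thought to detect.
import random
import time

ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m"}
RESET = "\033[0m"

def stroop_trial():
    word = random.choice(list(ANSI))   # the word's meaning
    ink = random.choice(list(ANSI))    # the ink color it is displayed in
    print(f"Type the INK color: {ANSI[ink]}{word.upper()}{RESET}")
    start = time.monotonic()
    answer = input("> ").strip().lower()
    reaction_time = time.monotonic() - start
    trial_type = "congruent" if word == ink else "incongruent"
    print(f"{'correct' if answer == ink else 'wrong'} "
          f"({trial_type} trial, {reaction_time:.2f}s)")

stroop_trial()
```

Averaged over many trials, the hesitation on incongruent trials is the measurable cost of the conscious override; in wu-wei, that effortful control is exactly what recedes as a skill becomes automatic.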
This type of release, though, is not reserved for musicians and artists. We can apply wu-wei to our own lives, even to the most mundane of tasks. Instead of striving consciously all the time, if we bring spontaneity to our actions and interactions with others, we can live effectively and with ease.
Open Your Senses - [Image by Ilaria Saltarella] As embodied, sensitive human beings, it is likely at times that we will be asked to meet with waves of confusion, uncertainty, hopelessness, and disappointment. And in the wake of these feelings, to lose contact with who we are and why we’re here. To conclude that something has gone wrong and that we’ve been abandoned by grace. What was so clear only days or weeks ago is somehow no longer in reach. While it is natural to conclude that these feelings are clear evidence that we’ve failed and fallen short, a hidden call is emerging within, attempting to reach us and to break through an old dream of partiality. While this surge of integration can feel achy and can take us to the ground, it is an emissary of surrender and a portal into reorganization. In order to step through, we must replace the movement toward resolution and symptom relief with curiosity and new levels of attunement and self-compassion. Look around. Listen carefully. Open your senses. There are images, feelings, messages, and signs, appearing out of the depths, with information for the creativity of ever-present rebirth. Through dreams, your imagination, pure vision, color, and sound – and through the drumbeat of the earth and of the heart – the doorways of the unconscious are opening, revealing wholeness in unprecedented and unknown ways. Slow down, breathe deeply, touch the ground, and open to the possibility that nothing has gone wrong. From the exact place you are in, you can open and soften into what is presenting itself. Even if you cannot understand it, if the mind is spinning in its attempt to put it all back together, again, slow down. The heart knows. There is a primordial permission inside the earth to fall apart and allow things to be reorganized. Underneath the ancient story of what is missing is an alive world of somatic information, inviting a new level of trust in your experience exactly as it unfolds here. The entirety of this material is valid, filled with life, and a luminous reflection of your unique path. Provide sanctuary for the material of the unseen as it emerges here, and as it seeks communion with you in creative and unknown ways. Offer a home for it and for the lonely, the confused, the hopeless, and the disappointed. For these ones are not obstacles, but allies of integration, filled with meaning and reminders of wholeness. In ways the mind may never understand, they are providing rest for the journey ahead.
Complexity Theory and the Nature of Consciousness - When it comes to understanding the nature of consciousness in the universe, there are two main philosophical approaches. One is panpsychism, in which consciousness pervades the universe at all levels. The other is emergentism, in which consciousness only appears once the universe has reached a certain level of complexity. Complexity theory has often been seen as supporting emergentism, largely because of its apparent similarity. In complexity theory, groups of interacting units self-organize into larger-scale structures. This can be seen with groups of cells forming tissues or entire animals, animals working together in colonies, and collections of animals giving rise to ecosystems. In all these cases, the properties and structures found at higher levels arise from the bottom up, rather than through top-down planning and design. In spite of its emergent tendencies, some scientists say that if you apply the principles of complexity theory to all levels of scale in the universe — from the quantum realm to cities and ecosystems — complexity theory may actually provide support for panpsychism. In a paper [https://www.upaya.org/uploads/pdfs/TheiseSentienceEverywhere.pdf] and an accompanying video, Neil Theise, MD, a diagnostic liver pathologist and adult stem cell researcher at the Beth Israel Medical Center of Albert Einstein College of Medicine, takes this approach to consciousness, or what he refers to as sentience. Theise draws on the work of Francisco Varela and Humberto Maturana, who coined the term autopoiesis — which literally means “self-creation.” Autopoiesis was Varela and Maturana’s attempt to define life, or the presence of sentience or “mind.” According to autopoiesis, a living system has four main features: 1) a boundary that separates the “being” from its surroundings, 2) processes that can sense and react, 3) a nervous system that connects external information with internal processes, and 4) communication channels between the being and its external environment. These criteria can be applied equally to simple cells — such as the Paramecium — and complex organisms with a central nervous system, such as people, elephants and whales. One key feature of autopoiesis as defined by Varela and Maturana is that it sets the lower limit of life at the cell. This implies that sentience in the universe only begins to exist at this level of complexity or higher. Theise uses complexity theory to expand on autopoiesis. The self-organizing nature of complexity theory occurs at all levels of scale, from the very simplest to the most complex. For this to happen, systems must display four characteristics: 1) a large number of interacting units, 2) negative feedback loops that maintain balance, 3) no sensing of the entire system by any one individual component, and 4) limited randomness. To give a better sense of these characteristics, Theise uses an ant colony as an example. In this case, the individual units are the ants, which self-organize into a larger-scale structure, a functioning colony. If you don’t have a lot of ants, the colony cannot function at the higher level of complexity, and you end up with just a bunch of ants doing their own thing. Negative feedback loops in the ant colony keep the conditions of the colony within a constant range. In actuality, the level is not entirely static, but oscillates within that set range. A thermostat in a room represents a simple negative feedback loop — if the room is too hot, the thermostat turns off the furnace. 
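To see the thermostat's negative feedback loop in action, here is a toy sketch in Python. All the numbers (set point, hysteresis band, heating and leak rates) are arbitrary illustrative values, not drawn from Theise's paper.

```python
# Toy negative feedback loop: a thermostat holding a room near a set point.
# The control action always opposes the drift, so the temperature ends up
# oscillating within a narrow band rather than running away.
SETPOINT = 20.0   # target temperature, deg C
BAND = 0.5        # hysteresis band around the set point
LEAK = 0.2        # heat lost to the outside each time step
HEAT = 0.6        # heat added each time step while the furnace runs

temp, furnace_on = 17.0, False
for step in range(60):
    if temp > SETPOINT + BAND:
        furnace_on = False    # too hot: feedback switches the furnace off
    elif temp < SETPOINT - BAND:
        furnace_on = True     # too cold: feedback switches it back on
    temp += (HEAT if furnace_on else 0.0) - LEAK
    print(f"step {step:2d}: {temp:5.2f} C, furnace {'on' if furnace_on else 'off'}")
```

Remove the two feedback branches and the temperature drifts without bound; with them, it settles into oscillation within a set range, the same regulated-but-not-static behavior the colony shows next.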
In an ant colony, negative feedback loops keep the conditions of the colony from wildly fluctuating or shifting too far in one direction, such as preventing ants from gathering more food when they already have plenty. Also, there is no single ant monitoring the entire colony, although what each ant perceives and how it behaves can influence the self-organization of the colony. Finally, there is some level of randomness in the colony — if you watch ants walking in a line, a few will stray from the path. This opens up opportunities for the colony, such as finding new sources of food. However, too much randomness will not allow the colony to self-organize, and can lead to chaos. These characteristics — as defined by autopoiesis and complexity theory — can be found in other complex systems, including cities, cultures, political systems and ecosystems. Theise says that they can also be seen in some simpler forms at levels smaller than the cell. He gives the example of the DNA helix, a type of biomolecule. The molecules that make up DNA allow electrons to flow along the helix, just as electricity flows through a wire. At certain points, there are “holes” where no electrons flow. These areas, which are located near genes, can trap potentially damaging ionizing radiation. Once captured, the energy of the ionizing radiation is transferred from the coding region to a non-coding region. As a result, mutations are more likely to occur in the non-coding regions, where they will have less of an effect on the organism. This is a simple example of the sensing, processing and responding that Varela and Maturana defined as a characteristic of life, or sentience. Similar sensing, processing and responding occurs at the atomic level, with electrons or subatomic particles serving to transfer information. Even at the quantum level, simpler forms of this sentience exist. But at this level, things are somewhat different than at the macro level. Every particle or string can be thought of as a closed unit, but it can also be described by a wave function that extends throughout the universe. As a result, complexity theory supports a panpsychist view of consciousness in the universe. In the video, Theise says that at the quantum level, “every tiny thing overlaps with every other tiny thing. There’s no longer inside and outside. It’s actually a kind of self-awareness, a self-sentience. So according to this model, sentience is pervasive throughout the universe.”
The Love Song of Devi and Bhairava - I first met the Vijnana Bhairava Tantra in 1968, when I was 18 and working in a physiology lab at the University of California. The lab was doing research on meditation, and in a staff meeting one afternoon, one of the other assistants read a page from the first English translation of the text, done by Swami Lakshman Joo and Paul Reps. Upon first hearing a few sentences, I fell in love. The words sang to me, transmitted an electric awakening, and thus began a love affair that has continued for 47 years now. The meditative practices discussed in the Vijnana Bhairava Tantra have been my guide and inspiration, and every day the beauty of the text excites me anew. About the Vijnana Bhairava Tantra: [Image: Swami Lakshmanjoo] This text appeared in Kashmir around AD 800, as far as we can tell. Before that, it may have been handed down through the oral tradition, which means that it was memorized and chanted for generations. In ancient texts such as the Rig Veda, the word tantra refers to the technology of weaving – ‘a loom, the warp’. There is the image of stretching or weaving threads in patterns across the framework of a loom. Metaphorically, a tantra is a tapestry of knowledge weaving together the threads of yoga technique. This tantra is a compendium of yoga meditation instructions, set as a conversation between lovers. Its focus is on full-body spirituality and accepting every breath, sensual experience, and emotion as doorways to deep and intimate contact with the energies of life. In the 20th century, Swami Lakshmanjoo of Kashmir was a prime custodian of the text, and taught it to many disciples: Paul Reps, Lilian Silburn, Jaideva Singh, John and Denise Hughes, and Alexis Sanderson. The conversation begins with Devi, The Goddess, tenderly asking, “Beloved, tell me, how do I enter more deeply into the reality of the universe?” The Sanskrit of her first words is to me the most beautiful phrase I have ever heard, in any language: Shrutam deva maya sarvam rudra yamala sambhavam (śrutaṁ deva mayā sarvaṁ rudrayāmalasambhavam.) In reply, Bhairava describes 112 techniques for becoming enlightened through everyday life experience. Each of these techniques is a way of attending to the rhythms, pulsations, and sensuousness of the divine energy that we are made of and that flows through us always. As we engage with these meditation techniques, we are alerted to the presence of the sacred that permeates our bodies. All of these methods involve savoring the incredible intensity underlying the most common experiences. They work by activating the senses, and extending the range of the senses further into the inner and the outer world. The basic dynamics of life – breathing, falling asleep, waking up, walking, loving – are all used as gateways to alignment and enlightenment. The 112 practices described in the Vijnana Bhairava Tantra are vast in scope, playful, intimate, and full-bodied. There are techniques of pranayama (breath awareness), kundalini energy awakening, mudra (gestures), mantra (instrumental words or words of power and love), bhavana (creative meditations), and japa (repetition of prayers). There are also many informal practices mentioned. Again and again the text encourages us to throw our delightful awareness into intense experience (kṣip – ‘to throw, cast, send, dispatch, to move hastily, to throw a glance, to direct the thoughts upon.’) Any experience we long for or accidentally find ourselves in the middle of can be utilized as a doorway into awakening. 
“Are you longing for sex? Come on in – this is a doorway. Do you live for dancing? That is also a doorway. Is music your passion? Go for it. Do you want to be alone in nature? That is a path. Are you stuck in a rainstorm in the middle of the night? Take this as an invitation to really wake up. Is partying with great food and drink and a circle of friends your greatest pleasure? This is a doorway to consciousness indeed. Are you angry or jealous? Those are stunning doorways. Are you terrified, running in panic, or exhausted? Those are also portals to awakening. Are you tired of yoga and meditation techniques? You can just do nothing, that is also a doorway.” The Vijnana Bhairava Tantra is framed as a conversation between the Goddess Who Is the Creative Power of the Universe and the God Who Is the Consciousness that Permeates Everywhere. For short, they call each other Devi and Bhairava, or Shakti and Shiva. They are lovers and inseparable partners, and one of their favorite places of dwelling is in the human heart. One of the meta-messages embedded in the text is The Goddess and her Beloved saying to us, “No matter where you are in human experience, whether you are ecstatic, lonely, lost, in pain, or in delight, we are right here inside your most intimate experience, join with us whenever you want. We are closer than your breath.” This Version of the Text: Each of the 112 doorways is described in a verse of 32 syllables, set forth in a classic Sanskrit meter known as anustubh (anuṣṭubh – ‘to praise after, to follow in praising’, a kind of meter consisting of four padas or quarter-verses of eight syllables each.) Into each verse, which takes about 14 seconds to chant, are woven layers and layers of meaning, jokes, puns, and images that point to sensuous experience. The language of the text abounds in earthy humor and sexual innuendo. When Shiva describes the Yoga of Kissing in verse 70, the word he uses is lehana. Though usually translated as ‘kissing’, the word’s actual definition includes ‘the act of licking, tasting, or lapping with the tongue’. To lovers, licking is an utterly different word than kissing. When monks and nuns translate this word, of course they edit out the juiciness. When Devi uses the word bindu, it means ‘a detached particle, drop, globule’, and ‘a mark made by the teeth of a lover on the lips of his mistress’. Everywhere in this text, the Sanskrit lexicon is used with superb skill to indicate nuances of meditative experience. Sanskrit is gloriously polysemous (poly, ‘many’ + sema, ‘sign’). There are multiple layers of information reverberating in each word, and each layer evokes realms of wonder and awe. The language is coded as densely as if you said in English, “BB King, Clapton, Hendrix, Page”. That is eight syllables, and each one or two syllables evokes a style, a set list, a series of legendary, era-defining performances by these adept guitar players, and the awakening that the music evoked in the listeners. Or think of these eight syllables: “Bach, Beethoven, Wagner, Mozart.” Worlds of revelatory beauty evoked in a few words. In each verse of the Vijnana Bhairava Tantra they were texting or tweeting to the future: “Here is the greatest thing I have ever learned. I have encoded it so there is not one extra syllable, one extraneous thought. It’s as perfect, polished, purified, consecrated – as saṃskṛta – as I know how to make it. 
My prayer is that this message makes it through to you intact, you who will be born in a distant time and place.” For example, in one of the verses, Shiva is describing a mantra practice, and uses the word pluta, ‘floated, floating or swimming in, bathed, overflowed, submerged, covered or filled with, protracted, prolated or lengthened, flown, leaping, a flood, deluge . . .’ This is an exquisite expression of the experience of meditating with a mantra, especially as attention shifts from verbal pronunciation to subvocal speech, then to the energy impulse of the sound as it dissolves into oceanic silence. You can obtain these sorts of descriptors by interviewing modern mantra yoga practitioners and asking them what they are experiencing. Those who know how to meditate deeply with sound in the way described in this verse often say, spontaneously, “I feel flooded by the mantra, floating with it, bathed in the sound.” The usual translation of pluta is simply ‘protracted’. While it is true that in certain stages of practice the mantra does become protracted – Aaaaaaaaaaaaaaa Uuuuuuuuuuuuu Mmmmmmmmmmmmmmmmmm – there are many other subtle sensory experiences that occur, and these are hinted at extensively in the rich metaphoric language of the text. Therefore I prefer to use as much of the full semantic range of each word as I can fit onto the page without cluttering up the flow. Sanskrit words are often defined in the Monier-Williams Sanskrit-English Dictionary with a series of five or ten images, and we can accept these as pointing to embodied sensory experience. One of my main techniques in creating the versions of the text that I call The Radiance Sutras is simply to use the full semantic field of each word. What I attempt to do is let as many of the images in the Sanskrit as possible find their way into sentences in English, and in this way convey some of the humor and juiciness (rasa) of the original. Rasa, by the way, is a word with vast semantic range: its basic sense is ‘juice’ – of plants or fruit, and also ‘the best or finest or prime part of anything, essence, marrow, liquor, drink, syrup, elixir, potion, nectar, semen, taste, flavor, love, affection, desire, charm, pleasure, delight’. Rasa is also aesthetic relish – ‘the taste or character of a work, the feeling or sentiment prevailing in a work of art’. Another polysemic Sanskrit word used in the text is nitya, often translated as ‘eternal’. For the first decade I was involved with this tantra, I was totally bored by the word nitya, because I was accepting this narrow range of meaning. ‘Eternal’ was just not an interesting concept to me. But I was delighted to discover the listing in the Monier-Williams, which is more personal: ‘innate, native, one’s own, continual, perpetual, eternal, constantly dwelling or engaged in, intent upon, devoted or used to, the sea, the ocean’. When I read that, I got it – ‘native of eternity. A denizen of the eternal, at home in the ocean of eternity’. Emerson said, “Every word was once a poem.” With Sanskrit, there is a sense that the words remember they were once cognitions, startling insights by a seer, and then they were sung as an expression of wonder. When I first began exploring the text as a teenager, friends would ask me to share with them what I was learning about how to meditate. I knew nothing about how to teach meditation, so I would ask them to describe to me their own natural meditative experiences. 
Then I would simply listen, and after half an hour or an hour, usually they would be speaking from inside their native experience. Then together we would browse through Lakshman Joo and Paul Reps’ translation, to see what verse reminded them of their own spontaneous experiences. In this way, I learned that virtually everyone has had some taste of the juice, some flirtation with the ecstatic practices the Vijnana Bhairava is singing about. When we would read the Lakshman Joo and Paul Reps translation in Zen Flesh, Zen Bones, or the translation of Lilian Silburn, or Jaideva Singh, there would be a flash of recognition, “Aha. Yes. I know that experience. It calls to me.” Over the past decades, it has been my great privilege and honor to have spent thousands of hours in this way, listening to meditators describe moment-by-moment what they are experiencing, as part of various research projects at the University of California. One surprising finding is that when meditators are in the midst of one of the doorways – one of the 112 practices mentioned in the text – the Sanskrit of the corresponding verse often feels familiar and intimate. For example, one of the meditations is to listen to stringed instruments, and the word used in the stanza is tantri (tántrī) – ‘the wire or string of a lute, the strings of the heart, any tubular vessel of the body, sinew, vein’. In this one word of two syllables, there is a wealth of metaphors pointing to the technology of stringing a thread or a string between a wooden frame, for weaving fabric, or weaving a tune, and also the thrilling experience of feeling one’s heart strings vibrating. Sanskrit is intrinsically poetic and evocative. I was surprised and a bit scared when the thought occurred to me to do a fresh version of the text, but it felt like a request. So I began to live with the text in a new way, wondering how you could say one of the stanzas in English. I began by reading the Sanskrit of the text over and over and meditating with the sound of the words as I slowly learned the full semantic field of each word. It can take a month to become saturated with just one stanza of 32 syllables, because the meaning of each word is so deep and vast. Gradually the text sang itself into form, and I would generate twenty to fifty versions of each stanza, seeking to let the aliveness of the Sanskrit sing itself into English. Tantras, in the words of Gavin Flood, are “not simply passive texts but are performative, used in life transforming practices” (The Tantric Body, p. 4). To paraphrase Flood, the text is brought to life in the act of reading and inward performance, and in the act of sharing and outward performance. When we read, if we engage with the text, that is an inward performance. When we share with others the meanings we find that match our experiences, that is an outward ritual. This is a performance-oriented version of the Vijnana Bhairava Tantra. It is meant to inspire practice and is written in such a way that when reading it quietly to yourself, you may feel invited into a practice, and become more intimate with yourself. You may find yourself in the midst of a practice just in reading. This is antar yoga, where antara is ‘interior, intimate, the interior part of a thing, Soul, heart, supreme soul’. Feel free to speak it out loud, share it with a friend or student, jump up and dance, jump in and do one or more of the practices. Sanskrit is a song of experience.
Its words are rich in meaning, with many images and metaphors, both sensual and spiritual. The English used in The Radiance Sutras comes from the images and metaphors in the Sanskrit of the text, which is luscious, evocative, and electrifying. People are shocked at how much Shakti Sanskrit has. In this version I take the images in the definition of each word as yuktarūpaka – the ‘appropriate metaphor’. Each word sparkles with many images that suggest direct living experience. The feeling is of an acceptance of all human emotion, all of our longing, all of our yearning for sensuous and spiritual experience. The world evoked by Sanskrit is one in which poetry, dance, craft skills, archery, medicine, theater, literature, mythology – all are one field of exploration. All are part of the same seamless texture of experience. I refer to The Radiance Sutras as a ‘version’ rather than a ‘translation’ because I am following a different set of rules than those developed by Indologists and those engaged in historical criticism. I am seeking to let the liveliness of the original Sanskrit come across, using the root images and metaphors in each word. In the Monier-Williams Sanskrit-English Dictionary (1899), the definitions, elicited from native lexicographers during the latter half of the 19th century, are full of interesting metaphors and images that give direction and hints relevant to the yoga practice being presented. For example, consider these metaphor-packed dictionary listings, at your leisure (italics are mine, and the definitions are abbreviated): Laya – the act of sticking or clinging to, to become attached to any one, to disappear, be dissolved or absorbed, to hide or conceal one’s self. Lying down. Melting, dissolution, disappearance or absorption in. Rest, repose. Place of rest, residence, house, dwelling. Mental inactivity, spiritual indifference. Sport, diversion, merriness. Delight in anything. An embrace. In music – time, a kind of measure. The union of song, dance, and instrumental music. A pause. A swoon. Merge. Bhū – to become, be, arise, come into being, exist, be found, live, stay, abide. To cherish, animate, enliven, refresh, encourage, promote, further. To addict or devote oneself to, practice. To manifest, exhibit, show. Becoming, being, existing, springing, arising. The place of being, space, world, or universe. The earth, ground. Soil. Floor. Pavement. A spot or piece of ground. Prakāśa – visible, shining, bright. Clear, manifest, open, public. Pronouncing a name out loud. Expanded. Universally noted, famous, celebrated for. Openly, publicly, before the eyes of all. Clearness, brightness, splendor, luster, light. Elucidation, explanation, display. Manifestation, expansion, diffusion. Glory. Sunshine, open spot or air. The gloss on the upper part of a (horse’s) body. The messengers of Vishnu. Laughter. Yuj – to yoke or join or fasten or harness (horses or a chariot), to make ready, set to work, use, employ, to equip an army. To offer, perform (prayers, a sacrifice). To put arrows on a bow-string. To fix in, insert, inject (semen). To turn or direct or fix or concentrate the mind, thoughts upon. To concentrate the mind in order to obtain union with the Universal Spirit, be absorbed in meditation. To join, unite, connect, bring together (to be attached, cleave to). To join one’s self to. In astronomy, to come into union or conjunction with. To be united in marriage. To urge or impel to. To encompass, embrace. Exciting. Being in couples or pairs. 
Each word is a poem and wants to become a song. Each word may have a meaning in meditation, and also in music, dance, astrology, astronomy, alchemy, sex, prayers, and something to do with horses. You could make a song lyric out of a single definition. All the above definitions are shortened for ease of reading, and the full definitions contain lots of opposites, so if love is mentioned, and melting and merging, then so is dying. If surrender is mentioned, so is control and domination. It’s the stuff of modern pop songs, Country Western, musicals, Rap and Hip Hop. With such rich polysemy, hundreds of very different translations could be done. This is intentional. The Sanskrit of the text is constructed so that you could translate it afresh every day for a lifetime and continue to see new meaning. I often do thirty to forty different translations and let the text sing itself into something lively. This often takes several months in which I am doing nothing except chanting the Sanskrit of this brief text all day, from 4 in the morning until dawn, and then for several more hours before noon, and again in the afternoon. I ask the Sanskrit of the text to repeat itself like a prayer of the heart, and I listen to the pulsation and the music of it. I am usually standing and walking around while doing this – you can only go so far with a text like this if you are sitting at a desk. The text definitely wants to be danced to and to jump off the page into performance of some kind. A listener to the Vijnana Bhairava Tantra who was aware of the meanings of these words would have a vast and ever-surprising range of associations flowing in her mind as she heard these being chanted, even for the thousandth time. There are plays of sound in the words themselves, and rhythms of vision, for if you know the images in each word, moving pictures flicker and flow continually as the Sanskrit flows, making a unique movie upon each hearing. Resonance reaches far and wide, setting whole fields of ideas and experiences vibrating, invoking myths, epics, conversations (Upanishads) and hilarious stories. In her initial questions that invoke the conversation, Devi uses the term para, which is defined as ‘the Universal Soul, far, distant, opposite, previous (in time), ancient, past, future, next, the name of a sound in the first of its four stages, the wider or more extended or remoter meaning of a word’. Therefore in decoding the conversation, we make use of the wider meanings of all the words, as indicated in the industry-standard dictionary. The real translation is when the words jump off the page onto your tongue and light up your eyes and your heart. And the translation continues when you are able to engage with your yoga practice with a bit more love and liveliness.
The Heart of Space
Let’s look at one of the stanzas, verse 32 of the text, and the 9th practice given by Bhairava.
śikhipakṣaiś citrarūpair maṇḍalaiḥ śūnyapañcakam | dhyāyato’nuttare śūnye praveśo hṛdaye bhavet
The Lakshman Joo and Paul Reps translation in Zen Flesh, Zen Bones reads like this: ‘Imagine the five-colored circles of the peacock tail to be your five senses in illimitable space. Now let their beauty melt within.  Similarly, at any point in space or on a wall until the point dissolves.
Then your wish for another comes true.’ The Jaideva Singh translation in Vijnanabhairava or Divine Consciousness, A Treasury of 112 Types of Yoga (Delhi: Motilal Banarsidass, 1979) is this: ‘The yogi should meditate in his heart on the five voids of the five senses which are like the five voids appearing in the circles of motley feathers of peacocks. Thus he will be absorbed in the Absolute void.’ If we look at the definitions of each word, we see a rich cluster of improbable images, suggesting the possibility of offering a juicier version of this stanza. Sikhi, the first word of this verse, is listed in the Monier-Williams as ‘A peacock. A name of Indra. The god of love.’ Taking the hint, we see that the peacock is the national bird of India, a symbol of grace, pride, joy, and extravagant beauty; Indra is the god of the senses, and this makes sense because the teaching is about the senses. Indriya is ‘fit or agreeable to Indra, a companion of Indra, bodily power, power of the senses.’ ‘The god of love’ may refer to Krishna, who is depicted as having a peacock feather in his crown. The Hare Krishnas say peacocks are “evidence of the sublimely spiritual quality inherent within material beauty.” Another hint is that peacock feathers are sometimes used in Shaktipat, a transmission of divine energy, from the teacher to the student, in the tradition of this text. The goddess Saraswati is sometimes portrayed with a peacock at her side. Paksa (pakṣa) – a wing, pinion. A feather, the feathers on both sides of an arrow. The fin of a fish. The shoulder. The flank or side or the half of anything. The half of a lunar month. A limb or member of the body. In algebra, a primary division or the side of an equation in a primary division. The feathers of the tail of a peacock, a tail, purity, perfection. Looking into the appeal of peacock feathers, we find that they are prized all over the world for their beauty, and because peacocks shed their feathers every year, they can be obtained without harming the bird. Furthermore, peacock feathers are iridescent – the colors move as the observer moves. The iridescence in peacock feathers is not due to pigments, but is the manifestation of intricate tiny crystal structures that reflect some wavelengths and filter others. Searching further afield in the Monier-Williams, we see that there are 208 Sanskrit words with ‘peacock’ in the definition, implying that this visual image is important and embedded in many aspects of the lexicon, and mythology, and iconography. Cakra – wheel, the circle on a peacock’s tail; candra – shining, the moon, the eye in a peacock’s tail; jivatha – long-lived, life, breath, a peacock; and over two hundred others. 
The peacock is a vahana, one of the animals the Goddesses and Gods ride (vāhana – ‘the act of making effort, drawing, bearing, carrying, conveying, bringing, any vehicle or conveyance or draught-animal.’) Other vahanas include the bull, swan, buffalo, elephant, mouse, horse, dog, tiger, parrot, and so on.
Citra – excellent, bright, clear, variegated, speckled, extraordinary appearance, the ether, sky, strange, wonderful, variety of color, picture, sketch; punning in the form of question and answer.
Rupa – outward appearance, phenomenon, color, shape, figure, dreamy or phantom shapes, loveliness, grace, beauty, splendor, nature, character, peculiarity, image, reflection, mode, manner, way, trace of; a show, play, drama, a remark made under particular circumstances when the action is at its height; a sound, word.
Mandala – circular, round, the path or orbit of a heavenly body, a halo around the sun or moon, a division or book of the Rig Veda.
Sunya – empty, void, hollow, desolate, deserted, vacant, vacuum, desert, space, heaven, atmosphere.
Pancha – five. Refers to the five voids and the source of the five senses: gandha, smell; rasa, taste; rupa, vision; sparsa, touch; shabda, hearing.
Dhyana – meditation, thought, reflection.
Anuttare – follow to the end; the Supreme, the Absolute. The Highest Reality, both transcendental and immanent. In the tradition of this text, anuttara refers to ‘The Supreme heart of Shiva’, and Anuttara Shakti is the pulsation, the spanda, of the Highest Creative Consciousness.
Pravesha – entering, a place of entrance, a door, entrance on the stage, entrance of the sun into a sign of the zodiac, employment, use, income, intentness on an object, engaging closely in a pursuit or purpose, manner, method.
Hrdaya – the heart (or region of the heart as the seat of feelings and sensations), soul, mind (as the center of mental operations); the heart or interior of the body, the heart or center or core or essence or best or dearest or most secret part of anything. True or divine knowledge, the veda. Science.
Bhavet – becomes, representing a possibility, a hoped-for state, a potential, ‘It could become’. Bhava is ‘becoming, being, existing, turning or transition into, true condition, reality, manner of being, temperament, any state of mind or body, way of thinking or feeling, sentiment, intention, love, affection, attachment; the seat of the feelings or affections, heart, soul, mind; wanton sport, dalliance.’
Whew. Reading this glossary, even for the five hundredth time, just flattens me. How could anyone possibly pack so much meaning into thirty-two syllables, thirteen or fourteen seconds of chanting? Oversimplified, the instruction is “meditate on the five voids — the five voids that are the ultimate sources of the five senses. Follow them to the end, and enter the heart.” Reverse-engineering the sentence, we could say, “The Supreme Heart of I AM consciousness is right here, in the heavenly space inside of your senses.” There are thousands of techniques here. Explore each and every one of your senses — hearing, touch, taste, smell, vision — individually and in combination. Find the hrdya, ‘that which captivates your heart’, in each sense, to make it interesting. Learn to follow each sense beyond itself into shunya – space, heaven. Whatever your practice — pranayama, asana, mantra, visualizations, meditating on the chakras — follow your sensuous experience into the beyond, and be at home in the Heart of Space.
An iridescent, shimmering beauty appears when you are perceiving spaciousness simultaneously with the sensuous, the transcendent with the immanent. Hrdya is also the name of ‘an intoxicating drink made from honey or the blossoms of Bassia Latifolia’ (an Indian tropical tree). This refers to the nourishing, delicious quality that arises in us as we practice gratefulness. The nectar of gratitude sustains us on our journey. There is also a blessing and a warning: hrdya is a divine drunken quality that comes from living in the heart, a sober intoxication that emerges from seeing eternity shining through this fleeting moment. This sutra is saying, “You want initiation? You are longing to experience the Grace of God? The feather of initiation is always touching your forehead. God is always here, inside your moment-by-moment experience, inviting you to receive the transmission of divine electricity that awakens you to live in your essence.” Taking all of these hints, metaphors, and images, and playing with them, we could say:
The senses declare an outrageous world —
Sounds and scents, ravishing colors and shapes,
Ever-changing skies, iridescent reflections —
All these beautiful surfaces
Decorating vibrant emptiness.
The god of love is courting you.
Every perception is an invitation into revelation.
Hearing, seeing, smelling, tasting, touching —
Ways of knowing creation,
Transmissions of electric realization.
The deepest reality is always right here.
Encircled by splendor,
In the center of the sphere,
Meditate where the body thrills
To currents of intimate communion.
Follow your senses to the end and beyond —
Into the heart of space.
This article first appeared in The Sutra Journal by Dr. Lorin Roche http://www.sutrajournal.com/radiance-sutras-where-body-meets-infinity-by-lorin-roche 13 Dec
Republican Attorneys General eager to dismantle Obama’s climate agenda - Credit: CO2science.com How do you dismantle an agenda? We’re about to find out in the case of US climate rules and regulations that appeared in the Obama years. The Clean Power Plan looks doomed. Maybe CO2 won’t be called a pollutant any more? H/T GWPF As soon as President-elect Donald Trump assumes office Jan. 20, Republican attorneys general who have spent the past eight years battling the Obama administration’s climate change agenda will have a new role: supporting the Republican president’s complex legal effort to roll back that agenda, reports The Washington Post. By contrast, states with Democratic leadership — such as California, where Gov. Jerry Brown has promised all-out war against Mr. Trump on global warming — will go from being environmental partners with the federal government to legal aggressors on their own. Republicans have begun exercising their influence over the incoming president and his pick to lead the Environmental Protection Agency, Oklahoma Attorney General Scott Pruitt, who has built a political career by battling the very agency he is about to lead. Earlier this month, 24 attorneys general signed an open letter laying out how the Trump administration could begin to dismantle President Obama’s global warming agenda. The effort was led by West Virginia Attorney General Patrick Morrisey, a Republican who often partnered with Mr. Pruitt in bringing lawsuits against what they said was EPA overreach. The letter focuses on the EPA’s Clean Power Plan, a proposal to limit carbon emissions from power plants that requires all states to meet strict pollution guidelines laid out by the federal government. Federal data show the plan would drive up electricity prices. The Supreme Court this year issued a stay halting implementation of the Clean Power Plan, but Republican attorneys general are eager for the proposal to be formally taken off the books. “The incoming administration and Congress now have the opportunity to withdraw this unlawful rule and prevent adoption of a similar rule in the future,” the attorneys general wrote. “An executive order on Day One is critical. The order should explain that it is the administration’s view that the rule is unlawful and that EPA lacks authority to enforce it. The executive order is necessary to send an immediate and strong message to states and regulated entities that the administration will not enforce the rule.” — Some Democrat-led states are likely to continue implementing emissions reduction programs and are poised to become the EPA’s legal adversaries over the next four years. They will assume the job held by Mr. Pruitt’s Oklahoma and Mr. Morrisey’s West Virginia, completing a full role reversal. Source: Republican Attorneys General Eager To Dismantle Obama’s Climate Agenda | The Global Warming Policy Forum (GWPF) 11:15
France unveils the world’s first solar panel road - Solar panel road [image credit: Wattway] Five million Euros to power a few street lights sounds expensive. What effect traffic has on the panels remains to be seen, but dirt could be an issue. A solar panel road, claimed to be the world’s first, has opened in France, reports the Daily Mail Online. The 0.6-mile (1km) stretch of road in the small Normandy village of Tourouvre-au-Perche is paved with 2,880 solar panels, which convert energy from the sun into electricity. It is hoped that the road could eventually provide enough energy to power the small village’s street lights. The ‘Wattway’ road features 2,800 sq m (9,186 sq ft) of panels and was showcased today at an inauguration ceremony attended by French minister for Ecology, Sustainable Development and Energy Ségolène Royal. The road is expected to produce 280 MWh of electricity a year. While the daily production will fluctuate according to weather and seasons, it is expected to reach 767 kWh per day, with peaks up to 1,500 kWh per day in summer. Some 2,000 motorists will use the RD5 road every day during a two-year test period. During that time, assessments will be made as to whether the road is capable of generating enough power to run the village’s street lights. Tourouvre-au-Perche is home to around 3,400 residents. The project is said to have cost €5m (£4.2m/$5.1m) and was financed by the French government. Source: France unveils the world’s first solar panel road: 0.6-mile stretch could provide enough energy to power entire village’s street lights | Daily Mail Online 22 Dec
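As a quick sanity check on the figures quoted above, here is a back-of-envelope calculation (a sketch in Python; the reported 5 million euro cost and 280 MWh/year output are the only inputs taken from the article, and the 20-year panel lifetime is purely an illustrative assumption):

# Back-of-envelope check of the Wattway figures quoted in the article.
annual_output_kwh = 280 * 1000        # reported output: 280 MWh/year
cost_euros = 5_000_000                # reported cost: 5 million euros

# Average daily output implied by the annual figure
daily_kwh = annual_output_kwh / 365
print(f"Implied average daily output: {daily_kwh:.0f} kWh/day")   # ~767, matching the article

# Cost per kWh over an assumed 20-year panel life (an assumption, not from the article)
assumed_lifetime_years = 20
cost_per_kwh = cost_euros / (annual_output_kwh * assumed_lifetime_years)
print(f"Implied cost: {cost_per_kwh:.2f} euros per kWh")          # ~0.89 euros/kWh

Even ignoring maintenance, the implied cost per kilowatt-hour comes out several times typical retail electricity prices, which is presumably why the installation is framed as a two-year trial rather than a rollout.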
Mystery of ‘Alien Megastructure’ star testing astronomers’ creativity - Aliens might be the ‘Hollywood solution’ but those tend to be fictional. On the other hand, plausible explanations are elusive. Astronomers may have to think a little harder to solve the mystery of Boyajian’s star, reports Space.com. In September 2015, Yale University’s Tabetha Boyajian and her colleagues reported that the star KIC 8462852 has dimmed dramatically multiple times over the past seven years, once by an astounding 22 percent. NASA’s planet-hunting Kepler space telescope spotted these dimming events. But the brightness dips of “Boyajian’s star,” as it has come to be known, were far too significant to be caused by an orbiting planet, so astronomers began thinking of alternative explanations. Researchers have come up with many possible causes for the dimming, including a swarm of broken-apart comet fragments, variability in the activity of the star itself, a cloud of some sort in the interstellar medium between Kepler and Boyajian’s star, and, most famously, an orbiting “megastructure” built by an alien civilization to collect stellar energy. Researchers are testing these hypotheses to the extent possible. — The mystery has only deepened since Boyajian and colleagues’ September 2015 paper. Early this year, for example, astronomer Bradley Schaefer of Louisiana State University determined that, in addition to the periodic brightness dips, the star dimmed overall by about 20 percent between 1890 and 1989. This result was bolstered by another 2016 study, which found that Boyajian’s star dimmed by about 3 percent between 2009 and 2013. Astronomer Jason Wright of Penn State University, who helped popularize the megastructure hypothesis, has said that the interstellar-cloud explanation seems the most likely of the proffered hypotheses. But he’s not betting on it. “That would have to be some crazy interstellar cloud,” he told Space.com here last week at the annual fall meeting of the American Geophysical Union. Researchers may have to dig deeper to figure out exactly what’s causing the strange dimming of Boyajian’s star, Wright said. “I think it’s very likely that we haven’t heard the right answer yet — that I haven’t heard the right answer yet, anyway,” he said. Full report: Mystery of ‘Alien Megastructure’ Star Testing Astronomers’ Creativity | Space.com 21 Dec
November 2016 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly Update - This post provides updates of the values for the three primary suppliers of global land+ocean surface temperature reconstructions—GISS through November 2016 and HADCRUT4 and NOAA NCEI (formerly NOAA NCDC) through October 2016—and of the two suppliers of satellite-based lower troposphere temperature composites (RSS and UAH) through November 2016. It also includes a few model-data comparisons. This is simply an update, but it includes a good amount of background information for those new to the datasets. Because it is an update, there is no overview or summary for this post. There are, however, simple monthly summaries for the individual datasets. So for those familiar with the datasets, simply fast-forward to the graphs and read the summaries under the headings of “Update”.   INITIAL NOTES: We discussed and illustrated the impacts of the adjustments to surface temperature data in the posts: Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate? UPDATED: Do the Adjustments to Land Surface Temperature Data Increase the Reported Global Warming Rate? Do the Adjustments to the Global Land+Ocean Surface Temperature Data Always Decrease the Reported Global Warming Rate? The NOAA NCEI product is the new global land+ocean surface reconstruction with the manufactured warming presented in Karl et al. (2015). For summaries of the oddities found in the new NOAA ERSST.v4 “pause-buster” sea surface temperature data see the posts: The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014 Even though the changes to the ERSST reconstruction since 1998 cannot be justified by the night marine air temperature product that was used as a reference for bias adjustments (See comparison graph here), and even though NOAA appears to have manipulated the parameters (tuning knobs) in their sea surface temperature model to produce high warming rates (See the post here), GISS also switched to the new “pause-buster” NCEI ERSST.v4 sea surface temperature reconstruction with their July 2015 update. The UKMO also recently made adjustments to their HadCRUT4 product, but they are minor compared to the GISS and NCEI adjustments. We’re using the UAH lower troposphere temperature anomalies Release 6.0 for this post as the paper that documents it has been accepted for publication. And for those who wish to whine about my portrayals of the changes to the UAH and to the GISS and NCEI products, see the post here. The GISS LOTI surface temperature reconstruction and the two lower troposphere temperature composites are for the most recent month. The HADCRUT4 and NCEI products lag one month. Much of the following text is boilerplate that has been updated for all products. The boilerplate is intended for those new to the presentation of global surface temperature anomalies. Most of the graphs in the update start in 1979. That’s a commonly used start year for global temperature products because many of the satellite-based temperature composites start then. 
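A note on the arithmetic behind all of these products: an anomaly is simply a month’s value minus the average of that calendar month over the chosen base years. A minimal sketch of the calculation in Python (synthetic numbers; the real products involve gridding, area weighting, and merging steps not shown here):

import numpy as np

def monthly_anomalies(temps, years, base_start, base_end):
    """Convert absolute monthly values to anomalies.

    temps: monthly values, January of years[0] first, whole years only.
    The climatology is the mean of each calendar month over the base years.
    """
    temps = np.asarray(temps, dtype=float).reshape(len(years), 12)
    in_base = (years >= base_start) & (years <= base_end)
    climatology = temps[in_base].mean(axis=0)   # twelve monthly means
    return (temps - climatology).ravel()

# Toy series: a seasonal cycle plus a slow warming trend
years = np.arange(1979, 2017)
t = np.arange(years.size * 12)
temps = 14.0 + 5.0 * np.sin(2 * np.pi * t / 12) + 0.0015 * t
anoms = monthly_anomalies(temps, years, 1981, 2010)
print(f"November 2016 anomaly in the toy series: {anoms[-2]:+.2f} deg C")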
We discussed why the three suppliers of surface temperature products use different base years for anomalies in chapter 1.25 – Many, But Not All, Climate Metrics Are Presented in Anomaly and in Absolute Forms of my free ebook On Global Warming and the Illusion of Control – Part 1 (25MB). Since the July 2015 update, we’re using the UKMO’s HadCRUT4 reconstruction for the model-data comparisons using 61-month filters. And I’ve resurrected the model-data 30-year trend comparison using the GISS Land-Ocean Temperature Index (LOTI) data. For a continued change of pace, let’s start with the lower troposphere temperature data. I’ve left the illustration numbering as it was in the past when we began with the surface-based data. UAH LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (UAH TLT) Special sensors (microwave sounding units) aboard satellites have orbited the Earth since the late 1970s, allowing scientists to calculate the temperatures of the atmosphere at various heights above sea level (lower troposphere, mid troposphere, tropopause and lower stratosphere). The atmospheric temperature values are calculated from a series of satellites with overlapping operation periods, not from a single satellite. Because the atmospheric temperature products rely on numerous satellites, they are known as composites. The level nearest to the surface of the Earth is the lower troposphere. The lower troposphere temperature composites include altitudes from zero to about 12,500 meters, but are most heavily weighted to altitudes below 3,000 meters. See the left-hand cell of the illustration here. The monthly UAH lower troposphere temperature composite is the product of the Earth System Science Center of the University of Alabama in Huntsville (UAH). UAH provides the lower troposphere temperature anomalies broken down into numerous subsets. See the webpage here. The UAH lower troposphere temperature composite is supported by Christy et al. (2000) MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons. Additionally, Dr. Roy Spencer of UAH presents at his blog the monthly UAH TLT anomaly updates a few days before the release at the UAH website. Those posts are also regularly cross posted at WattsUpWithThat. UAH uses the base years of 1981-2010 for anomalies. The UAH lower troposphere temperature product is for the latitudes of 85S to 85N, which represent more than 99% of the surface of the globe (see the quick check below). The UAH lower troposphere data are now at Release 6. The paper that supports the latest release has been accepted for publication (no date yet set for publication), and the Release 6 data are no longer being published with a “beta” identifier. See Dr. Roy Spencer’s post here. Those Release 6.0 enhancements lowered the warming rates of their lower troposphere temperature anomalies. See Dr. Spencer’s blog post Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade and my blog post New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years. The UAH lower troposphere anomaly data, Release 6.0, through November 2016 are here. Update: The November 2016 UAH (Release 6.0) lower troposphere temperature anomaly is +0.45 deg C. It rose slightly since October (an increase of about +0.04 deg C).
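As an aside, the “more than 99% of the surface of the globe” figure quoted above is easy to verify: the fraction of a sphere’s surface lying between latitudes -φ and +φ is sin(φ). A quick check in Python, taking the stated latitude limits at face value (for RSS, the TLT product actually stops at 70S, as noted in the next section):

import math

# Fraction of a sphere's surface between latitudes -lat and +lat is sin(lat)
for product, lat in (("UAH (85S-85N)", 85.0), ("RSS listing (82.5S-82.5N)", 82.5)):
    frac = math.sin(math.radians(lat))
    print(f"{product}: {frac:.1%} of Earth's surface")
# UAH (85S-85N): 99.6% ... RSS listing (82.5S-82.5N): 99.1%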
Figure 4 – UAH Lower Troposphere Temperature (TLT) Anomaly Composite – Release 6.0 RSS LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (RSS TLT) Like the UAH lower troposphere temperature product, Remote Sensing Systems (RSS) calculates lower troposphere temperature anomalies from microwave sounding units aboard a series of NOAA satellites. RSS describes their product at the Upper Air Temperature webpage. The RSS product is supported by Mears and Wentz (2009) Construction of the Remote Sensing Systems V3.2 Atmospheric Temperature Records from the MSU and AMSU Microwave Sounders. RSS also presents their lower troposphere temperature composite in various subsets. The land+ocean TLT values are here. Curiously, on that webpage, RSS lists the composite as extending from 82.5S to 82.5N, while on their Upper Air Temperature webpage linked above, they state: We do not provide monthly means poleward of 82.5 degrees (or south of 70S for TLT) due to difficulties in merging measurements in these regions. Also see the RSS MSU & AMSU Time Series Trend Browse Tool. RSS uses the base years of 1979 to 1998 for anomalies. Note: RSS recently released new versions of the mid-troposphere temperature (TMT) and lower stratosphere temperature (TLS) products. So far, their lower troposphere temperature product has not been updated to this new version. Update: The November 2016 RSS lower troposphere temperature anomaly is +0.39 deg C. It rose slightly (an uptick of +0.04 deg C) since October 2016. Figure 5 – RSS Lower Troposphere Temperature (TLT) Anomalies GISS LAND OCEAN TEMPERATURE INDEX (LOTI) Introduction: The GISS Land Ocean Temperature Index (LOTI) reconstruction is a product of the Goddard Institute for Space Studies. Starting with the June 2015 update, GISS LOTI uses the new NOAA Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), the pause-buster reconstruction, which also infills grids without temperature samples. For land surfaces, GISS adjusts GHCN and other land surface temperature products via a number of methods and infills areas without temperature samples using 1200km smoothing. Refer to the GISS description here. Unlike the UK Met Office and NCEI products, GISS masks sea surface temperature data at the poles, anywhere seasonal sea ice has existed, and they extend land surface temperature data out over the oceans in those locations, regardless of whether or not sea surface temperature observations for the polar oceans are available that month. Refer to the discussions here and here. GISS uses the base years of 1951-1980 as the reference period for anomalies. The values for the GISS product are found here. (I archived the former version here at the WaybackMachine.) Update: The November 2016 GISS global temperature anomaly is +0.95 deg C. According to the GISS LOTI data, global surface temperature anomalies made an uptick in November, a +0.07 deg C increase. Figure 1 – GISS Land-Ocean Temperature Index NCEI GLOBAL SURFACE TEMPERATURE ANOMALIES (LAGS ONE MONTH) NOTE: The NCEI only produces the product with the manufactured-warming adjustments presented in the paper Karl et al. (2015). As far as I know, the former version of the reconstruction is no longer available online.
For more information on those curious NOAA adjustments, see the posts: NOAA/NCDC’s new ‘pause-buster’ paper: a laughable attempt to create warming by adjusting past data More Curiosities about NOAA’s New “Pause Busting” Sea Surface Temperature Dataset Open Letter to Tom Karl of NOAA/NCEI Regarding “Hiatus Busting” Paper NOAA Releases New Pause-Buster Global Surface Temperature Data and Immediately Claims Record-High Temps for June 2015 – What a Surprise! And recently: Pause Buster SST Data: Has NOAA Adjusted Away a Relationship between NMAT and SST that the Consensus of CMIP5 Climate Models Indicate Should Exist? The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014 Introduction: The NOAA Global (Land and Ocean) Surface Temperature Anomaly reconstruction is the product of the National Centers for Environmental Information (NCEI), which was formerly known as the National Climatic Data Center (NCDC). NCEI merges their new “pause buster” Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4) with the new Global Historical Climatology Network-Monthly (GHCN-M) version 3.3.0 for land surface air temperatures. The ERSST.v4 sea surface temperature reconstruction infills grids without temperature samples in a given month. NCEI also infills land surface grids using statistical methods, but they do not infill over the polar oceans when sea ice exists. When sea ice exists, NCEI leaves those polar ocean grids blank. The source of the NCEI values is their Global Surface Temperature Anomalies webpage. (Click on the link to Anomalies and Index Data.) Update (Lags One Month): The October 2016 NCEI global land plus sea surface temperature anomaly was +0.73 deg C. See Figure 2. It made a very noticeable downtick (a decrease of about -0.14 deg C) since September 2016. Figure 2 – NCEI Global (Land and Ocean) Surface Temperature Anomalies UK MET OFFICE HADCRUT4 (LAGS ONE MONTH) Introduction: The UK Met Office HADCRUT4 reconstruction merges the CRUTEM4 land-surface air temperature product and the HadSST3 sea-surface temperature (SST) reconstruction. CRUTEM4 is the product of the combined efforts of the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia. And HadSST3 is a product of the Hadley Centre. Unlike the GISS and NCEI reconstructions, grids without temperature samples for a given month are not infilled in the HADCRUT4 product. That is, if a 5-deg latitude by 5-deg longitude grid does not have a temperature anomaly value in a given month, it is left blank. Blank grids are indirectly assigned the average values for their respective hemispheres before the hemispheric values are merged (see the sketch below). The HADCRUT4 reconstruction is described in the Morice et al (2012) paper here. The CRUTEM4 product is described in Jones et al (2012) here. And the HadSST3 reconstruction is presented in the 2-part Kennedy et al (2012) paper here and here. The UKMO uses the base years of 1961-1990 for anomalies. The monthly values of the HADCRUT4 product can be found here. Update (Lags One Month): The October 2016 HADCRUT4 global temperature anomaly is +0.59 deg C. See Figure 3. It also made a very noticeable downtick from September to October 2016, a decrease of about -0.13 deg C.
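The blank-grid handling described above can be sketched as an area-weighted hemispheric mean over sampled cells only; skipping the blanks is arithmetically the same as first assigning each blank the average of the sampled cells, which is the indirect infilling just described. A simplified illustration in Python (one value per 5-degree latitude band, rather than the real 5x5-degree grid):

import numpy as np

def hemisphere_mean(anoms, lats):
    """Area-weighted mean over sampled cells only (NaN marks a blank grid)."""
    anoms = np.asarray(anoms, dtype=float)
    weights = np.cos(np.radians(lats))     # area weight shrinks toward the pole
    sampled = ~np.isnan(anoms)
    return np.sum(anoms[sampled] * weights[sampled]) / np.sum(weights[sampled])

# Toy Northern Hemisphere: one value per 5-degree band, two bands unsampled
lats = np.arange(2.5, 90.0, 5.0)
anoms = np.full(lats.size, 0.5)
anoms[[0, -1]] = np.nan                    # blank grids near the equator and pole
print(f"NH mean anomaly: {hemisphere_mean(anoms, lats):+.2f} deg C")   # +0.50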
Figure 3 – HADCRUT4 COMPARISONS The GISS, HADCRUT4 and NCEI global surface temperature anomalies and the RSS and UAH lower troposphere temperature anomalies are compared in the next three time-series graphs. Figure 6 compares the five global temperature anomaly products starting in 1979. Again, due to the timing of this post, the HADCRUT4 and NCEI updates lag the UAH, RSS, and GISS products by a month. For those wanting a closer look at the more recent wiggles and trends, Figure 7 starts in 1998, which was the start year used by von Storch et al (2013) Can climate models explain the recent stagnation in global warming? They, of course, found that the CMIP3 (IPCC AR4) and CMIP5 (IPCC AR5) models could NOT explain the recent slowdown in warming, but that was before NOAA manufactured warming with their new ERSST.v4 reconstruction…and before the strong El Niño of 2015/16. Figure 8 starts in 2001, which was the year Kevin Trenberth chose for the start of the warming slowdown in his RMS article Has Global Warming Stalled? Because the suppliers all use different base years for calculating anomalies, I’ve referenced them to a common 30-year period: 1981 to 2010. Referring to their discussion under FAQ 9 here, according to NOAA: This period is used in order to comply with a recommended World Meteorological Organization (WMO) Policy, which suggests using the latest decade for the 30-year average. The impacts of the unjustifiable, excessive adjustments to the ERSST.v4 reconstruction are visible in the two shorter-term comparisons, Figures 7 and 8. That is, the short-term warming rates of the new NCEI and GISS reconstructions are noticeably higher than the HADCRUT4 data. See the June 2015 update for the trends before the adjustments. Figure 6 – Comparison Starting in 1979 ##### Figure 7 – Comparison Starting in 1998 ##### 15 Dec
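The re-referencing to a common 1981-2010 period mentioned in the comparisons above amounts to subtracting each product’s own mean over that period; trends and month-to-month wiggles are untouched, and only the zero line moves. A sketch in Python with synthetic series (the constant offset stands in for a different native base period):

import numpy as np

def rereference(anoms, years, new_base=(1981, 2010)):
    """Shift a monthly anomaly series to a new base period."""
    anoms = np.asarray(anoms, dtype=float)
    year_of_month = np.repeat(years, 12)
    in_base = (year_of_month >= new_base[0]) & (year_of_month <= new_base[1])
    return anoms - anoms[in_base].mean()   # only the zero line moves

# Two synthetic "products": the same series offset by different native base years
years = np.arange(1979, 2017)
rng = np.random.default_rng(0)
product_a = 0.015 * np.arange(years.size * 12) / 12 + rng.normal(0, 0.1, years.size * 12)
product_b = product_a + 0.3
a, b = rereference(product_a, years), rereference(product_b, years)
print(f"Max difference after re-referencing: {np.abs(a - b).max():.6f}")   # ~0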
November 2016 Sea Surface Temperature (SST) Anomaly Update - MONTHLY SEA SURFACE TEMPERATURE ANOMALY MAP The following is a global map of Reynolds OI.v2 Sea Surface Temperature (SST) anomalies for November 2016. It was downloaded from the KNMI Climate Explorer. The contour range was set to -2.5 to +2.5 deg C and the anomalies are referenced to the WMO-preferred period of 1981-2010 for short-term data. November 2016 Sea Surface Temperature (SST) Anomalies Map (Global SST Anomaly = +0.27 deg C) MONTHLY GLOBAL OVERVIEW The global Sea Surface Temperature anomaly for November 2016 shows a noticeable decline since October. A sizeable downtick in the Northern Hemisphere (-0.10 deg C) was tempered by a lesser downtick in the Southern Hemisphere (-0.03 deg C). The noticeable drop in the Northern Hemisphere was driven by the North Pacific, which showed a sizeable decline (-0.21 deg C)…a response to the large ribbon of below normal sea surface temperature anomalies stretching across the extratropical North Pacific. (See the post Something to Keep an Eye On – The Large Blue Ribbon of Below-Normal Sea Surface Temperatures in the North Pacific.) Monthly sea surface temperature anomalies for the NINO3.4 region show weakening La Niña conditions. The monthly Global Sea Surface Temperature anomalies are presently at +0.27 deg C, referenced to the WMO-preferred base years of 1981 to 2010. (1) Global Sea Surface Temperature Anomalies Monthly Change = -0.06 deg C THE EQUATORIAL PACIFIC The monthly NINO3.4 Sea Surface Temperature anomalies for November 2016 have weakened and are rapidly approaching the threshold of ENSO-neutral conditions (not El Niño, not La Niña). They were at -0.54 deg C, an increase of about +0.21 deg C since October. (Also see the Weekly data shown near the end of the post.) (2) NINO3.4 Sea Surface Temperature Anomalies (5S-5N, 170W-120W) Monthly Change = +0.21 deg C #################################### The sea surface temperature anomalies for the NINO3.4 region in the east-central equatorial Pacific (5S-5N, 170W-120W) are a commonly used index for the strength, frequency and duration of El Niño and La Niña events. We keep an eye on the sea surface temperatures there because El Niño and La Niña events are the primary cause of the yearly variations in global sea surface temperatures AND they are the primary cause of the long-term warming of global sea surface temperatures over the past 30+ years. See the discussion of the East Pacific versus the Rest-of-the-World that follows. We present NINO3.4 sea surface temperature anomalies in monthly and weekly formats in these updates (the threshold logic is sketched below). Also see the weekly values toward the end of the post. INITIAL NOTES Note 1: I’ve downloaded the Reynolds OI.v2 data from the KNMI Climate Explorer, using the base years of 1981-2010. The updated base years help to reduce the seasonal components in the ocean-basin subsets—they don’t eliminate those seasonal components, but they reduce them. Note 2: We discussed the reasons for the elevated sea surface temperatures in 2014 in the post On The Recent Record-High Global Sea Surface Temperatures – The Wheres and Whys. For 2015, The Blob and the El Niño are responsible for the noticeable increases. See General Discussion 3 – On the Reported Record High Global Surface Temperatures in 2015 – And Will Those Claims Continue in 2016? in my ebook On Global Warming and the Illusion of Control – Part 1.
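The thresholds used throughout these ENSO discussions (+0.5 deg C and above for El Niño conditions, -0.5 deg C and below for La Niña conditions, ENSO-neutral in between) can be written as a small helper; note that NOAA’s official event declarations additionally require the anomaly to persist across several overlapping three-month seasons, which this sketch ignores:

def enso_state(nino34_anomaly):
    """Classify a NINO3.4 anomaly (deg C) by the +/-0.5 deg C thresholds."""
    if nino34_anomaly >= 0.5:
        return "El Nino conditions"
    if nino34_anomaly <= -0.5:
        return "La Nina conditions"
    return "ENSO-neutral"

# The values discussed in this update
print(enso_state(-0.54))   # monthly Reynolds OI.v2 value: La Nina conditions
print(enso_state(-0.4))    # recent weekly value: ENSO-neutral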
Due to the lag in responses globally, the 2014/15/16 El Niño is still responsible for the recently elevated global sea surface temperatures in 2016. Note 3: I’ve moved the model-data comparison to the end of the post. Note 4: I recently added a graph of the sea surface temperature anomalies for The Blob in the eastern extratropical North Pacific. It also is toward the end of the post. It will be removed after the November 2016 post. Note 5: The sea surface temperature data in this post are the original (weekly/monthly, 1-deg spatial resolution) version of NOAA’s Optimum Interpolation (OI) Sea Surface Temperature (SST) v2 (aka Reynolds OI.v2)…not the (over-inflated, out-of-the-ballpark, extremely high warming rate) high-resolution, daily version of NOAA’s Reynolds OI.v2 data, which we illustrated and discussed in the recent post On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014. THE EAST PACIFIC VERSUS THE REST OF THE WORLD NOTE: This section of the updates has been revised. We discussed the reasons for the changes in the post Changes to the Monthly Sea Surface Temperature Anomaly Updates. For years, we have shown and discussed that the surfaces of the global oceans have not warmed uniformly during the satellite era of sea surface temperature composites. In fact, some portions of the global oceans have cooled during that 3+ decade period. One simply has to look at a trend map for the period of 1982 to 2013 to see where the ocean surfaces had warmed and where they had not. Yet the climate science community has not addressed this. See the post Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans. The North Atlantic (anomalies illustrated later in the post) has had the greatest warming over the past 3+ decades, but the reason for this is widely known. The North Atlantic has an additional mode of natural variability called the Atlantic Multidecadal Oscillation. If you’re not familiar with the Atlantic Multidecadal Oscillation see the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts An Introduction To ENSO, AMO, and PDO — Part 2 and Multidecadal Variations and Sea Surface Temperature Reconstructions. As a result of the Atlantic Multidecadal Oscillation, the surface of the North Atlantic warmed at a rate that was more than twice the rate of the surface of the rest of the global oceans. See the trend comparison graph here. The East Pacific Ocean also stands out in the trend map linked above. Some portions of its surfaces warmed and others cooled. It comes as no surprise then that the linear trend of the East Pacific (90S-90N, 180-80W) Sea Surface Temperature anomalies is so low since the start of the Reynolds OI.v2 composite through 2013. (See the graph here from the December 2013 update.) With the strong 2015/16 El Niño conditions in the eastern tropical Pacific, and with The Blob in 2013, 2014 and 2015, it has acquired a slight positive trend, but it’s still far below the approximate +0.15 deg C/decade warming rate predicted by the CMIP5 climate models. Please see Figure 19 in the post Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans. (Note that the region also includes portions of the Arctic and Southern Oceans.) That is, there has been little warming of the sea surfaces of the East Pacific (from pole to pole) in 3-plus decades. The East Pacific is not a small region.
It represents about 33% of the surface area of the global oceans. Notice how there appears to have been a strong El Niño event in 2014 in the East Pacific values, while there had only been a small event that year, and how the strong El Niño in 2015/16 caused a further rise. Note also how there appears to have been a shift in 2013. Refer again to the post On The Recent Record-High Global Sea Surface Temperatures – The Wheres and Whys. (3) East Pacific Sea Surface Temperature (SST) Anomalies (90S-90N, 180-80W) #################################### That leaves the largest region of the trend map, which includes the South Atlantic, the Indian and West Pacific Oceans, with the corresponding portions of the Arctic and Southern Oceans. Sea surface temperatures there warmed in very clear steps, in response to the significant 1986/87/88 and 1997/98 El Niño/La Niña events. It also appears as though the sea surface temperature anomalies of this subset have made another upward shift in response to the 2009/10 El Niño and 2010/11 La Niña events. I further described the ENSO-related processes that cause these upward steps in the recent post Answer to the Question Posed at Climate Etc.: By What Mechanism Does an El Niño Contribute to Global Warming? As you’ll note, the values for the South Atlantic, Indian and West Pacific Oceans appear now to be responding to the 2014/15/16 El Niño. And they appear to have made another El Niño-related uptick. (4) Sea Surface Temperature Anomalies of The South Atlantic-Indian-West Pacific Oceans (Weighted Average of 0-90N, 40E-180 @ 27.9% And 90S-0, 80W-180 @72.1%) #################################### NOTE: I have updated the above illustration and following discussion, because NOAA has recently revised their Oceanic NINO Index…once again. They’ve used the base years of 1986-2015 for the most recent data, which has resurrected the 2014/15 El Niño. The periods used for the average temperature anomalies for the South Atlantic-Indian-West Pacific subset between the significant El Niño events of 1982/83, 1986/87/88, 1997/98, 2009/10 and 2015/16 are determined as follows. Using the most recent NOAA Oceanic Nino Index (ONI) for the official months of those El Niño events, I shifted (lagged) those El Niño periods by six months to accommodate the lag between NINO3.4 SST anomalies and the response of the South Atlantic-Indian-West Pacific Oceans, then deleted the South Atlantic-Indian-West Pacific values that correspond to those significant El Niño events. I then averaged the South Atlantic-Indian-West Pacific Oceans sea surface temperature anomalies between those El Niño-related gaps (a sketch of this procedure follows below). You’ll note I’ve ended the updates for the period after the 2009-10 El Niño. That was done to accommodate the expected response to the 2015/16 El Niño. The Sea Surface Temperature anomalies of the East Pacific Ocean, or approximately 33% of the surface area of the global oceans, have shown comparatively little long-term warming since 1982 based on the linear trend. And between upward shifts, the Sea Surface Temperature anomalies for the South Atlantic-Indian-West Pacific subset (about 52.5% of the global ocean surface area) remain relatively flat, though they actually cool slightly.
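The masking-and-averaging procedure just described might be sketched as follows (hypothetical event windows for illustration; the real ones come from NOAA’s Oceanic NINO Index, lagged six months as described above):

import numpy as np

def between_event_means(series, events, lag=6):
    """Average a monthly series between (lagged) El Nino events.

    events: (start, end) month indices of official El Nino periods. Each
    window is shifted by `lag` months for the response lag, the shifted
    windows are deleted, and the remaining segments are averaged separately.
    """
    series = np.asarray(series, dtype=float)
    keep = np.ones(series.size, dtype=bool)
    for start, end in events:
        keep[start + lag:end + lag + 1] = False
    means, segment = [], []
    for value, kept in zip(series, keep):
        if kept:
            segment.append(value)
        elif segment:
            means.append(float(np.mean(segment)))
            segment = []
    if segment:
        means.append(float(np.mean(segment)))
    return means

# Toy series with two upward shifts following the masked "events"
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0.0, 0.05, 60),
                         rng.normal(0.2, 0.05, 60),
                         rng.normal(0.4, 0.05, 60)])
print(between_event_means(series, events=[(50, 58), (110, 118)]))   # ~[0.0, 0.2, 0.4]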
Anthropogenic forcings are said to be responsible for most of the rise in global surface temperatures over this period, but the Sea Surface Temperature anomaly graphs of those regions discussed above prompt a two-part question: Since 1982, what anthropogenic global warming processes would overlook the sea surface temperatures of 33% of the global oceans and have an impact on the other 52% but only during the months of the significant El Niño events of 1986/87/88, 1997/98 and 2009/10? These ENSO-related processes were also discussed in great detail in my recently published book Who Turned on the Heat? The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation. See the blog post Everything You Ever Wanted to Know about El Niño and La Niña… for an overview. It’s now free. Click here for a copy. STANDARD NOTE ABOUT THE REYNOLDS OI.V2 COMPOSITE The MONTHLY graphs illustrate raw monthly OI.v2 sea surface temperature anomalies from November 1981 to November 2016, as they are presented by the KNMI Climate Explorer. While NOAA uses the base years of 1971-2000 for this product, those base years cannot be used at the KNMI Climate Explorer because they extend before the start year of the product. (NOAA had created a special climatology for the Reynolds OI.v2 product.) I’ve referenced the anomalies to the period of 1981 to 2010, which is actually 1982 to 2010 for most months. MONTHLY INDIVIDUAL OCEAN AND HEMISPHERIC SEA SURFACE TEMPERATURE UPDATES (5) Northern Hemisphere Sea Surface Temperature (SST) Anomalies Monthly Change = -0.10 deg C #################################### (6) Southern Hemisphere Sea Surface Temperature (SST) Anomalies Monthly Change = -0.03 deg C #################################### (7) North Atlantic Sea Surface Temperature (SST) Anomalies (0 to 70N, 80W to 0) Monthly Change = +0.12 deg C #################################### (8) South Atlantic Sea Surface Temperature (SST) Anomalies (0 to 60S, 70W to 20E) Monthly Change = -0.06 deg C #################################### (9) Pacific Sea Surface Temperature (SST) Anomalies (60S to 65N, 120E to 80W) Monthly Change = -0.13 Deg C #################################### (10) North Pacific Sea Surface Temperature (SST) Anomalies (0 to 65N, 100E to 90W) Monthly Change = -0.21 Deg C #################################### (11) South Pacific Sea Surface Temperature (SST) Anomalies (0 to 60S, 120E to 70W) Monthly Change = -0.03 deg C #################################### (12) Indian Ocean Sea Surface Temperature (SST) Anomalies (60S to 30N, 20E to 120E) Monthly Change = +0.01 deg C #################################### (13) Arctic Ocean Sea Surface Temperature (SST) Anomalies (65N to 90N) Monthly Change = -0.17 deg C #################################### (14) Southern Ocean Sea Surface Temperature (SST) Anomalies (90S-60S) Monthly Change = +0.10 deg C #################################### WEEKLY SEA SURFACE TEMPERATURE ANOMALIES Weekly NINO3.4 sea surface temperature anomalies are at -0.4 deg C, which is above (“warmer” than) the threshold of La Niña conditions. That is, NINO3.4 region sea surface temperature anomalies are now in ENSO-neutral conditions (not El Niño, not La Niña). (15) Weekly NINO3.4 Sea Surface Temperature (SST) Anomalies #################################### MODEL-DATA COMPARISON: To counter the nonsensical “Just what AGW predicts” rantings of alarmists about the “record-high” global sea surface temperatures in 2014 and 2015, I’ve added a model-data comparison of satellite-era global sea surface temperatures to these monthly updates.
See the example below. The models are represented by the multi-model ensemble-member mean of the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report. For further information on the use of the model mean, see the post here. For most models, historic forcings run through 2005 (2012 for others) and the middle-of-the-road RCP6.0 forcings are used thereafter in this comparison. The data are represented by NOAA’s Optimum Interpolation Sea Surface Temperature data, version 2—a.k.a. Reynolds OI.v2—which is NOAA’s best. The model outputs and data have been shifted so that their trend lines begin at “zero” anomaly for the (November 1981) start month of this composite; a sketch of this zeroing step appears at the end of this post. That “zeroing” helps to highlight how poorly the models simulate the warming of the ocean surfaces…the modeled warming rate is noticeably higher than the observed one. Both the Reynolds OI.v2 values and the model outputs of their simulations of sea surface temperature (TOS) are available to the public at the KNMI Climate Explorer. 000 – Model-Data Comparison #################################### THE BLOB We discussed the demise of the short-term reappearance of The Blob in the post THE BLOB has Dissipated. And we discussed the cooling in the extratropical Pacific above and in the post Something to Keep an Eye On – The Large Blue Ribbon of Below-Normal Sea Surface Temperatures in the North Pacific. I’ll stop presenting the sea surface temperature anomalies for The Blob region now that I’ve shown the enormous decline there since September…a drop in excess of 2.5 deg C. (16) The Blob (40N-50N, 150W-130W) Monthly Change = -1.43 deg C Note: I’ve changed the coordinates for The Blob to 40N-50N, 150W-130W to agree with those used in the NOAA/NCEP Monthly Ocean Briefing. I had been using the coordinates of 35N-55N, 150W-125W for The Blob. HURRICANE MAIN DEVELOPMENT REGION The sea surface temperatures of the tropical North Atlantic are one of the primary factors that contribute to the development and maintenance of hurricanes. I’ve recently added to the update the sea surface temperatures and anomalies for the hurricane main development region of the North Atlantic. It is often represented by the coordinates of 10N-20N, 80W-20W. While hurricanes tend to form there, they can also form outside it. Like June 2016, sea surfaces for the main development region are warmer than “normal”. (17) Hurricane Main Development Region (10N-20N, 80W-20W) Monthly Change (Anomalies) = +0.23 deg C I’ve also included a graph of the November sea surface temperatures (not anomalies) for the Main Development Region. It confirms that sea surface temperatures there are near to being at record highs and that sea surface temperatures of the Main Development Region are above the 26 deg C threshold for hurricane formation, as they normally are during November. NOTE: I’ll discontinue the MDR updates until next year’s hurricane season. INTERESTED IN LEARNING MORE ABOUT HOW DATA SUGGEST THE GLOBAL OCEANS WARMED NATURALLY? Why should you be interested? The hypothesis of manmade global warming depends on manmade greenhouse gases being the cause of the recent warming. But the sea surface temperature record indicates El Niño and La Niña events are responsible for the warming of global sea surface temperature anomalies over the past 32 years, not manmade greenhouse gases. Scroll back up to the discussion of the East Pacific versus the Rest of the World.
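The “zeroing” step referenced above can be implemented with a least-squares fit: subtract each series’ fitted intercept so that its trend line passes through zero at the start month. A sketch in Python with synthetic model and data series (the warming rates are illustrative, not the actual CMIP5 or Reynolds OI.v2 numbers):

import numpy as np

def zero_trend_at_start(series):
    """Shift a series so its least-squares trend line starts at zero.

    Subtracting the fitted intercept moves the trend line's value at the
    first month to zero without changing the trend (slope) itself.
    """
    series = np.asarray(series, dtype=float)
    slope, intercept = np.polyfit(np.arange(series.size), series, 1)
    return series - intercept

# Synthetic "data" and "model" series with different warming rates
months = np.arange(420)                                        # 35 years
rng = np.random.default_rng(2)
data = 0.10 / 120 * months + rng.normal(0, 0.1, months.size)   # +0.10 C/decade
model = 0.15 / 120 * months + rng.normal(0, 0.1, months.size)  # +0.15 C/decade
for name, series in (("data", data), ("model", model)):
    zeroed = zero_trend_at_start(series)
    intercept = np.polyfit(np.arange(zeroed.size), zeroed, 1)[1]
    print(f"{name}: trend line now starts at {intercept:+.6f}")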
I’ve searched sea surface temperature records for more than 4 years, and I can find no evidence of an anthropogenic greenhouse gas signal. That is, the warming of the global oceans has been caused by Mother Nature, not anthropogenic greenhouse gases. My e-book (pdf) about the phenomena called El Niño and La Niña is titled Who Turned on the Heat? with the subtitle The Unsuspected Global Warming Culprit, El Niño Southern Oscillation. It is intended for persons (with or without technical backgrounds) interested in learning about El Niño and La Niña events and in understanding the natural causes of the warming of our global oceans for the past 30 years.
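As a side note for readers following along at home: the ENSO-neutral and La Niña calls in these updates come down to a simple threshold test against the ±0.5 deg C limits cited above. A minimal sketch of that test follows (NOAA’s official index additionally requires persistence across several overlapping 3-month seasons, a step omitted here):

```python
def enso_phase(nino34_anomaly, threshold=0.5):
    """Classify ENSO conditions from a NINO3.4 SST anomaly (deg C)
    using the +/-0.5 deg C thresholds cited in these updates."""
    if nino34_anomaly <= -threshold:
        return "La Nina conditions"
    if nino34_anomaly >= threshold:
        return "El Nino conditions"
    return "ENSO-neutral conditions"

print(enso_phase(-0.4))  # -> "ENSO-neutral conditions", as in the weekly update
```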
Early December 2016 La Niña Update: Mixed Signals from NOAA and BOM - Last month, on November 10, NOAA issued a La Niña Advisory, indicating weak La Niña conditions existed and that those conditions were “slightly favored to persist (~55% chance) through winter 2016-17.” Let’s see how things are progressing. NOAA’S WEEKLY SATELLITE-ENHANCED REYNOLDS OI.v2 SEA SURFACE TEMPERATURE DATA SHOW A WEAKENING TO ENSO-NEUTRAL CONDITIONS The sea surface temperature anomalies of the NINO3.4 region of the tropical Pacific (coordinates 5S-5N, 170W-120W) are a commonly used index for the timing, strength and duration of El Niño and La Niña events. NOAA’s weekly sea surface temperature anomaly data for the NINO3.4 region, based on their original Reynolds OI.v2 data, show that surface temperatures there have been in ENSO-neutral conditions (not La Niña, not El Niño) for 3 weeks. As of the week centered on Wednesday November 30 and for the two prior weeks, NINO3.4 sea surface temperature anomalies were at -0.4 deg C, which is above the -0.5 deg C threshold of La Niña conditions. (Rounding out the month of November, for the first week, the value was -0.7 deg C.) See the time-series graph in Figure 1. Figure 1 Note that the horizontal green line is the most recent weekly value, not a trend line. These data are based on NOAA’s original version of their Reynolds OI.v2 satellite-enhanced sea surface temperature dataset. The anomalies are referenced to the base period of 1981-2010. This is not the dataset that NOAA uses for their “official” ENSO indices. NOAA’S MONTHLY IN SITU-ONLY ERSST.v4 SEA SURFACE TEMPERATURE DATA (WITH A FIXED SET OF BASE YEARS FOR ANOMALIES, 1981-2010) SHOW TEMPERATURES WELL WITHIN WEAK LA NIÑA CONDITIONS FOR NOVEMBER AND STRENGTHENING SLIGHTLY Again we’re looking at sea surface temperature data for the NINO3.4 region, but this time we’re looking at a version based on NOAA’s ERSST.v4 monthly “pause buster” sea surface temperature data, which is based solely on observations from buoys and ship inlets, with no satellite-based data. This is the dataset that NOAA uses for their “official” ENSO index, but here it is referenced to the fixed base years of 1981-2010…while NOAA takes a few additional steps for their “official” Oceanic NINO Index. The monthly ERSST.v4-based data for November 2016 show NINO3.4 sea surface temperature anomalies well within the realm of weak La Niña conditions, at -0.82 deg C. See Figure 2. The October value was -0.8 deg C. Figure 2 NOAA’S MONTHLY IN SITU-ONLY ERSST.v4 SEA SURFACE TEMPERATURE DATA (WITH SHIFTING BASE YEARS FOR ANOMALIES) SHOW SLIGHTLY STRONGER LA NIÑA CONDITIONS WITH A MORE NOTICEABLE STRENGTHENING As opposed to using a fixed 30-year base period for the ERSST.v4-based NINO3.4 anomalies in their “official” Oceanic NINO Index, NOAA uses multiple 30-year periods that shift every 5 years. See the NOAA explanation here. NOAA claims they’ve taken this curious approach “to remove this [global] warming trend” from the equatorial Pacific sea surface temperature data. We revealed, however, in the 2012 post Comments on NOAA’s Recent Changes to the Oceanic NINO Index (ONI) that the global “warming trend” in NINO3.4 sea surface temperature data resulted primarily from the impact of the well-known and naturally occurring 1976 Pacific climate shift. Apparently, NOAA does (oops) doesn’t want Mother Nature to be responsible for even localized warming. This, of course, renders the Oceanic NINO Index useless for realistic climate studies.
Regardless, NOAA has adopted this odd approach to calculate the sea surface temperature anomaly values for their “official” Oceanic NINO Index. The monthly NINO3.4 values that are input to the Oceanic NINO Index are shown in Figure 3. The November 2016 NINO3.4 sea surface temperature “anomaly” for this altered dataset is -0.92 deg C, which is approaching the -1.0 deg C threshold of a moderately strong La Niña. From October to November 2016, this modified dataset shows a noticeable strengthening, a further decline of about 0.05 deg C. Figure 3 So it appears that NOAA is working hard at making the 2016/17 La Niña an “official” reality. Note: NOAA then uses a 3-month running average of this altered monthly NINO3.4-based data for their Oceanic NINO Index. THE SOUTHERN OSCILLATION INDEX FROM AUSTRALIA’S BOM CONTINUES TO SHOW ENSO NEUTRAL CONDITIONS The Southern Oscillation Index (SOI) from Australia’s Bureau of Meteorology is another widely used reference for the strength, frequency and duration of El Niño and La Niña events. We discussed the Southern Oscillation Index in Part 8 of the 2014/15 El Niño series. It is derived from the sea level pressures of Tahiti and Darwin, Australia, and as such it reflects the wind patterns off the equator in the southern tropical Pacific. With the Southern Oscillation Index, El Niño events are strong negative values and La Niñas are strong positive values, which is the reverse of what we see with sea surface temperature-based indices. The November 2016 Southern Oscillation Index shows ENSO-neutral conditions exist in the tropical Pacific…with a value of -0.7, which is of the opposite sign to La Niña conditions. (The BOM threshold for La Niña conditions is an SOI value of +8.0.) According to the SOI, we briefly made it into La Niña conditions in September, and conditions have been ENSO-neutral since then. Figure 4 presents a time-series graph of the SOI data. Figure 4 Again, the horizontal green line is the most recent monthly value, not a trend line. Also see the BOM Recent (preliminary) Southern Oscillation Index (SOI) values webpage. The current 30-day running average and the 90-day average are still in ENSO-neutral conditions. CLOSING As noted in the title, we’re getting mixed signals from NOAA and BOM, and from NOAA itself, about the existence of La Niña conditions in the tropical Pacific. WANT TO LEARN HOW EL NIÑO AND LA NIÑA EVENTS CONTRIBUTE TO LONG-TERM GLOBAL WARMING? I published On Global Warming and the Illusion of Control (25MB .pdf) back in November 2015. The introductory post is here. That 700+ page climate change reference is free. Chapter 3.7 includes detailed discussions of El Niño events and their aftereffects…though not as detailed as in Who Turned on the Heat? My ebook Who Turned on the Heat? – The Unsuspected Global Warming Culprit: El Niño-Southern Oscillation (23MB .pdf) goes into a tremendous amount of detail to explain El Niño and La Niña processes and the long-term global-warming aftereffects of strong El Niño events. It too is free. See the introductory post here. Who Turned on the Heat? weighs in at a whopping 550+ pages, about 110,000+ words. It contains somewhere in the neighborhood of 380 color illustrations. In pdf form, it’s about 23MB. It includes links to more than a dozen animations, which allow the reader to view ENSO processes and the interactions between variables.
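For readers who want to see what NOAA’s shifting-climatology scheme amounts to in practice, here is a rough sketch. It computes anomalies against 30-year climatologies that shift every 5 years and then applies the 3-month running average used for the Oceanic NINO Index. The window placement is a simplification of NOAA’s exact procedure, and the input series is synthetic, so treat this as an illustration of the scheme described above, not a reimplementation of NOAA’s code.

```python
import numpy as np
import pandas as pd

def oni_style_index(sst):
    """Anomalies against 30-year climatologies that shift every 5 years
    (a simplified version of the ONI scheme described above), followed
    by the 3-month running average."""
    anoms = pd.Series(index=sst.index, dtype=float)
    first, last = sst.index[0].year, sst.index[-1].year
    for year in range(first, last + 1, 5):
        # pick a 30-year climatology roughly centered on this 5-year block
        start = min(max(year - 15, first), last - 29)
        base = sst[str(start):str(start + 29)]
        clim = base.groupby(base.index.month).mean()
        block = sst[str(year):str(year + 4)]
        anoms[block.index] = block.values - clim.loc[block.index.month].values
    return anoms.rolling(3, center=True).mean()

# Usage with a synthetic monthly NINO3.4 SST series (deg C)
idx = pd.date_range("1950-01", "2016-11", freq="MS")
sst = pd.Series(26.8 + 0.5 * np.sin(np.arange(idx.size) / 7.0), index=idx)
print(oni_style_index(sst).tail())
```

The point of the post’s complaint is visible in the `start` line: because the climatology window slides along with the data, any long-term trend in NINO3.4 temperatures is absorbed into the shifting base periods rather than appearing in the anomalies.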
The Politicization of Climate Science Is NOT a Recent Phenomenon - There’s lots of yakking around the blogosphere and mainstream media about President-elect Donald Trump politicizing climate science. But it’s nothing new. Climate science became a tool for pushing political agendas almost 3 decades ago. In 1988, the United Nations, a political body, founded the global-warming-report-writing entity called the Intergovernmental Panel on Climate Change (IPCC). The IPCC was created to support political agendas. And in 1995, politics corrupted climate science, when politicians changed the language of the IPCC’s second assessment report, eliminating the scientists’ statements of uncertainties. To this day, the climate science community still cannot truly differentiate between natural and anthropogenic global warming. Why? The climate models used in attribution studies still cannot simulate modes of natural variability that can cause global warming over multidecadal timeframes. INTRODUCTION President-elect Donald Trump’s skepticism of human-induced global warming/climate change was one of the focuses of the mainstream media during the U.S. elections and remains so in the minds of many environmentalists and their associates in the media. A plethora of articles and talking-head clips have been published and broadcast, bringing the political nature of climate science to the public eye once again. But how long ago did climate science become politicized? I was reminded of the answer to that question while reading Dr. Roy Spencer’s recent blog post Global Warming: Policy Hoax versus Dodgy Science. (Great title!) There Dr. Spencer begins: In the early 1990s I was visiting the White House Science Advisor, Sir Prof. Dr. Robert Watson, who was pontificating on how we had successfully regulated Freon to solve the ozone depletion problem, and now the next goal was to regulate carbon dioxide, which at that time was believed to be the sole cause of global warming. I was a little amazed at this cart-before-the-horse approach. It really seemed to me that the policy goal was being set in stone, and now the newly-formed United Nations Intergovernmental Panel on Climate Change (IPCC) had the rather shady task of generating the science that would support the policy. THE SHADY TASK OF GENERATING THE SCIENCE TO SUPPORT POLICY To reinforce Dr. Spencer’s cart-before-the-horse statement, I’m going to reproduce a portion of the Introduction to my free ebook, a 700+ page reference work, On Global Warming and the Illusion of Control – Part 1. This portion provides quotations from the United Nations and the Intergovernmental Panel on Climate Change, along with links to the referenced webpages. Under the heading of YOU’D BE WRONG IF YOU THOUGHT THE IPCC WAS A SCIENTIFIC BODY, I wrote in part: The Intergovernmental Panel on Climate Change (IPCC) is a political entity, not a scientific one. The IPCC begins the opening paragraphs of its History webpage (my boldface): The Intergovernmental Panel on Climate Change was created in 1988. It was set up by the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP) to prepare, based on available scientific information, assessments on all aspects of climate change and its impacts, with a view of formulating realistic response strategies.
The initial task for the IPCC as outlined in UN General Assembly Resolution 43/53 of 6 December 1988 was to prepare a comprehensive review and recommendations with respect to the state of knowledge of the science of climate change; the social and economic impact of climate change, and possible response strategies and elements for inclusion in a possible future international convention on climate. Thus, the IPCC was founded to write reports. Granted, they are very detailed reports, so burdensome that few persons read them in their entirety. Of the people who do read them, most read only the Summaries for Policymakers. But are you aware that the language of the IPCC Summary for Policymakers is agreed to by politicians during week-long meetings? A draft is written by the scientists for the politicians, but the politicians debate how each sentence is phrased and whether it is to be included in the summary. And those week-long political debates about the Summary for Policymakers are closed to the public. Also from that quote above, we can see that the content of the IPCC’s reports was intended to support an international climate-change treaty. That 1992 treaty is known as the United Nations Framework Convention on Climate Change (UNFCCC). A copy of the UNFCCC is available here. Under the heading of Article 2 – Objective, the UNFCCC identifies its goal as limiting the emissions of greenhouse gases (my boldface): The ultimate objective of this Convention and any related legal instruments that the Conference of the Parties may adopt is to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Because the objective of the UNFCCC treaty is to limit the emissions of man-made greenhouse gases, and because the goal of the IPCC is to prepare reports that support the treaty, it is safe to say the IPCC’s sole role is simply to write scientific reports that support the assumed need to limit greenhouse gas emissions. Hmmm. Do you think that focus might limit scientific investigation and understandings? Later in the opening paragraph of the IPCC’s History webpage, they state (my boldface and caps): Today the IPCC’s role is as defined in Principles Governing IPCC Work, “…to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of HUMAN-INDUCED climate change, its potential impacts and options for adaptation and mitigation.” The fact that the IPCC has focused all of their efforts on “understanding the scientific basis of risk of human-induced climate change” is very important. The IPCC has never realistically tried to determine if natural factors could have caused most of the warming the Earth has experienced over the past century. For decades, they’ve worn blinders that blocked their views of everything other than the hypothetical impacts of carbon dioxide. The role of the IPCC has always been to prepare reports that support the reduction of greenhouse gas emissions caused by the burning of fossil fuels. As a result, that’s where all of the research money goes. The decision to only study human-induced global warming is a political choice, not a scientific one. And it’s a horrible choice.
As a result of that political choice, there is little scientific research that attempts to realistically determine how much of the warming we’ve experienced is attributable to natural factors. We know this is fact because the current generation of climate models—the most complex climate models to date—still cannot simulate naturally occurring ocean-atmosphere processes that can cause Earth’s surfaces (and the oceans to depth) to warm for multidecadal periods or stop that warming.   Skeptics have confirmed those failings a number of times in blog posts. I even wrote a book about those failings, appropriately titled Climate Models Fail. … [End Reprint] EVEN SHADIER: CHANGING THE SCIENCE TO SUPPORT POLICY Were you aware that politicians revised the text of the IPCC’s second assessment report, drastically changing the draft written by the scientists?  Once again, I’m reproducing a portion of my free ebook On Global Warming and the Illusion of Control – Part 1.  It’s from the heading of THE EVOLUTION OF THE CATASTROPHIC ANTHROPOGENIC GLOBAL WARMING MOVEMENT: While there were early scientific studies that pointed to possible increases in surface temperatures associated with the emissions of man-made greenhouse gases, let’s begin this discussion with the formation of the report-writing wing of the United Nations called the Intergovernmental Panel on Climate Change (IPCC). As discussed above, the primary task of the IPCC was to create reports that supported the politicians’ agendas. Limiting global warming was likely one of those focuses, but most assuredly there were many others. The politicians found scientists to write those reports—so began the mutually beneficial relationship between climate scientists and politicians. The politicians wanted scientific support for their agendas and the scientists were more than willing to oblige because the politicians held the purse strings for climate research. The first IPCC report in 1991 was inconclusive, inasmuch as the scientists could not differentiate between man-made and natural warming… Note for this post: The Policymakers Summary for the IPCC’s first assessment report is here. There they write: The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. Thus the observed increase could be largely due to this natural variability, alternatively this variability and other human factors could have offset a still larger human-induced greenhouse warming. The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more. So in 1991 the science community was not expecting to be able to differentiate between natural and anthropogenic global warming until 2001 at the earliest. [End note.] …In spite of those uncertain findings, a year later [in 1992] the politicians prepared a treaty called the United Nations Framework Convention on Climate Change with the intent of limiting global temperatures to 2 deg C above pre-industrial values—a limit that was first proposed in the mid-1970s by an economist, not a climate scientist. In the article Two degrees: The history of climate change’s ‘speed limit’ at TheCarbonBrief, authors Mat Hope & Rosamund Pearce write: Perhaps surprisingly, the idea that temperature could be used to guide society’s response to climate change was first proposed by an economist. 
In the 1970s, Yale professor William Nordhaus alluded to the danger of passing a threshold of two degrees in a pair of now famous papers, suggesting that warming of more than two degrees would push the climate beyond the limits humans were familiar with: “According to most sources the range of variation between distinct climatic regimes is on the order of ±5°C, and at present time the global climate is at the high end of this range. If there were global temperatures more than 2° of [sic] 3° above the current average temperature, this would take the climate outside of the range of observations which have been made over the last several hundred thousand years.” … In the early 1990s, the politicians continued to fling funds at scientists with the hope that the next report would provide support for their agendas. Much to the politicians’ astonishment, the scientists’ initial draft of the 1995 Summary for Policymakers for the 2nd Assessment Report from the IPCC was still inconclusive. Imagine that. In 1992, the United Nations had convinced many countries around the globe to enter into a treaty to limit emissions of greenhouse gases, when a year before the IPCC could not find mankind’s fingerprint on global warming. Then, by 1995, the politicians’ scientific report-writing body, the IPCC, still could not differentiate between man-made and natural warming, and the climate scientists had stated that fact in the draft of the second IPCC assessment report. The politicians were between a rock and a hard place. They’d had a treaty in place for 3 years but their report-writing scientists could not find evidence to support it. So, after most of the scientists had left the meeting, the politicians and a lone scientist changed the language of the second IPCC assessment report in a very subtle but meaningful way. Voila! The politicians and one scientist initiated what is now called the consensus. (See the 3-part, very detailed analysis by Bernie Lewin about the 1995 IPCC conference in Madrid. Part one is here.) … [End Reprint] The three parts of the series by Bernie Lewin about the 1995 IPCC conference in Madrid are appropriately titled: Madrid 1995: Was this the Tipping Point in the Corruption of Climate Science? (archived here.) Madrid 1995 and The Quest for the Mirror in the Sky (archived here.) Madrid 1995: The Last Day of Climate Science (archived here.) Bernie Lewin writes about the draft of the IPCC’s second assessment report in Part 1 of his series (My boldface): Alas, by the early autumn of 1995 the signs were not good. Although a draft leaked in September managed to say that the warming is unlikely to be entirely due to natural causes, this was hardly in dispute, and this was not exactly announcing imminent catastrophe. Moreover, there remained extraordinary strong caveats, especially in Chapter 8, to every positive conclusion. The draft that was circulated to the participants at the Madrid conference, and the only one available when the Report was finally ‘accepted’ by the meeting (see explanation in a following post), also stated in its introduction that results of recent studies point towards a human influence. This was the strongest statement yet, but the body of the document and the concluding summary were not so confident.
Some of the boldest retractions were as follows: Of Studies of Changes in Global Mean Variables (8.4.1): ‘While none of these studies has specifically considered the attribution issue, they often draw some attribution conclusions, for which there is little justification.’ Of the greenhouse signal in studies of modelled and observed spatial and temporal patterns of change (8.4.2.1): ‘none of the studies cited above has shown clear evidence that we can attribute the observed changes to the specific cause of increases in greenhouse gases.’ Of pattern studies ‘fingerprinting’ the global warming (see discussion in later post): While some of the pattern-based studies discussed have claimed detection of a significant climate change, no study to date has positively attributed all or part [of the climate change observed] to [anthropogenic] causes. Nor has any study quantified the magnitude of a greenhouse gas effect or aerosol effect in the observed data—an issue of primary relevance to policy makers. Of the overall level of uncertainty: Any claims of positive detection and attribution of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced. Of the question: When will an anthropogenic effect on climate be identified? (8.6): It is not surprising that the best answer to this question is, ‘We do not know.’ [A copy of the 9Oct95 draft of Ch 8 has not been obtained. UPDATE 29June12: 9Oct draft obtained and changes have been verified] The politicians didn’t like the uncertainties expressed in those statements, so they deleted them. Amazing! Were you aware that politicians had dictated climate science? Important note: Keep in mind that Mount Pinatubo erupted in 1991, temporarily driving global surface temperatures downward. While temperatures rebounded by 1995 to a level that was slightly higher than in 1991, the volcanic aerosols spewed into the stratosphere by Mount Pinatubo had produced a noticeable drop in the warming rate since the mid-1970s start of the recent warming period. See Figure 1. That is, the global warming rate from 1975 to 1995 is noticeably lower than the trend from 1975 to 1991, as one would expect. (I’ve used the GISS dTs data in the top graph of Figure 1, because GISS did not begin to use sea surface temperature data in their global temperature data until 1995. I’ve included the GISS Land-Ocean Temperature Index in the lower graph as a reference. Both are current versions of the data.) Figure 1 So with the massive impact of Mount Pinatubo on global surface temperatures, one might think that the continued uncertainty by climate scientists was still warranted in 1995. CLIMATE SCIENCE UNDER THE DIRECTION OF THE IPCC STILL CANNOT REALISTICALLY DIFFERENTIATE BETWEEN NATURAL AND HUMAN-INDUCED GLOBAL WARMING Once again, let me borrow a discussion from my free ebook On Global Warming and the Illusion of Control – Part 1. It’s Chapter 1.12 – How Scientists Attributed Most of the Global Warming Since the Mid-1970s to Man-made Causes: One of the objectives of the climate science community under the direction of the IPCC has been to attribute most of the global warming since the mid-1970s to man-made causes. In other words, if Mother Nature was responsible for the warming, the political goal to limit the use of fossil fuels would have no foundation, and because the intent of the IPCC is to support political agendas, the climate science community had to be able to point to mankind as the culprit.
The climate modelers achieved that goal using a few very simple tactics. The first thing the climate modelers did was ignore the naturally occurring ocean-atmosphere processes that contribute to or suppress global warming. The climate models used by the IPCC still to this day cannot simulate those processes properly, and we’ll illustrate that fact very plainly later in this book. Ignoring Mother Nature’s contributions was the simplest and most convenient way to show humans are responsible for the warming. The modelers also elected not to disclose this fact to the public when they presented their model-based attribution studies using the next tactic. That tactic is a very simple and easy-to-understand way to falsely attribute most of the warming to mankind. The modelers had their climate model runs that showed virtual global surface temperatures warming in response to all the climate forcings that are used as inputs to the models. They then performed additional modeling experiments. Instead of using all of the climate forcings they typically include in their simulations of past climate, they only used the natural climate forcings of solar radiation and volcanic aerosols in the extra climate model runs. The flawed logic: if the models run with only solar radiation and volcanic aerosols (natural forcings) cannot simulate the warming we’ve experienced in the late 20th century, and if the models run with natural and anthropogenic forcings can simulate the warming, then the warming since the 1970s had to be caused by man-made greenhouse gases. As an example, Figure 1.12-1 is a time-series graph that runs from 1880 to 2010. The solid brown curve shows the net radiative forcing of all forcings that are used as inputs to the climate models prepared by the Goddard Institute for Space Studies (GISS). They’re from the Forcings in GISS Climate Model webpage, specifically the table here. (In Chapter 2.3, we will illustrate the forcings individually.) Also included in Figure 1.12-1 is the net of only the solar irradiance (sunlight) and stratospheric aerosols (sunlight-blocking volcanic aerosols), shown as the dark green dashed curve; they are considered naturally occurring forcings. As we can see, the group with all of the forcings shows a long-term increase, while the combined forcings from the sun and volcanoes do not. # # # The climate scientists then ran the additional model simulations with only the natural forcings. They then compared the model simulations using natural and man-made forcings with the models run with the natural forcings only. An example of one of those comparisons is shown in Figure 1.12-2. The models run with man-made and natural forcings show considerable warming in the late 20th Century and the models run with only natural forcings do not show the warming. Graphs similar to the one shown in Figure 1.12-2 can be found in the 4th and 5th Assessment Reports from the IPCC. One example is FAQ 10.1, Figure 1 from Chapter 10, Detection and Attribution of Climate Change: from Global to Regional of the IPCC’s 5th Assessment Report (AR5). See my Figure 1.12-3. Note that the title of their FAQ 10.1 is “Climate Is Always Changing. How Do We Determine the Causes of Observed Changes?” Note: The citation required by the IPCC for the use of their illustration is at the end of the chapter. [End note.] About their FAQ 10.1, Figure 1, the IPCC writes: FAQ 10.1, Figure 1 illustrates part of a fingerprint assessment of global temperature change at the surface during the late 20th century.
The observed change in the latter half of the 20th century, shown by the black time series in the left panels, is larger than expected from just internal variability. Simulations driven only by natural forcings (yellow and blue lines in the upper left panel) fail to reproduce late 20th century global warming at the surface with a spatial pattern of change (upper right) completely different from the observed pattern of change (middle right). Simulations including both natural and human-caused forcings provide a much better representation of the time rate of change (lower left) and spatial pattern (lower right) of observed surface temperature change. Both panels on the left show that computer models reproduce the naturally forced surface cooling observed for a year or two after major volcanic eruptions, such as occurred in 1982 and 1991. Natural forcing simulations capture the short-lived temperature changes following eruptions, but only the natural + human caused forcing simulations simulate the longer-lived warming trend. The caption for their FAQ 10.1, Figure 1 reads: FAQ 10.1, Figure 1 | (Left) Time series of global and annual-averaged surface temperature change from 1860 to 2010. The top left panel shows results from two ensemble [sic] of climate models driven with just natural forcings, shown as thin blue and yellow lines; ensemble average temperature changes are thick blue and red lines. Three different observed estimates are shown as black lines. The lower left panel shows simulations by the same models, but driven with both natural forcing and human-induced changes in greenhouse gases and aerosols. (Right) Spatial patterns of local surface temperature trends from 1951 to 2010. The upper panel shows the pattern of trends from a large ensemble of Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations driven with just natural forcings. The bottom panel shows trends from a corresponding ensemble of simulations driven with natural + human forcings. The middle panel shows the pattern of observed trends from the Hadley Centre/Climatic Research Unit gridded surface temperature data set 4 (HadCRUT4) during this period. For another example of this misguided, childish logic, see Figure 9.5 from their 4th Assessment Report. The text of AR4 Chapter 9 for that illustration reads (my brackets and boldface): 9.4.1.2 Simulations of the 20th Century There are now a greater number of climate simulations from AOGCMs [Atmosphere-Ocean General Circulation Models] for the period of the global surface instrumental record than were available for the TAR [Third Assessment Report], including a greater variety of forcings in a greater variety of combinations. These simulations used models with different climate sensitivities, rates of ocean heat uptake and magnitudes and types of forcings (Supplementary Material, Table S9.1). Figure 9.5 shows that simulations that incorporate anthropogenic forcings, including increasing greenhouse gas concentrations and the effects of aerosols, and that also incorporate natural external forcings provide a consistent explanation of the observed temperature record, whereas simulations that include only natural forcings do not simulate the warming observed over the last three decades. As mentioned earlier, the logic behind this type of attribution is very simple, childishly simple.
If models that include anthropogenic and natural forcings can simulate the warming, and if the models that include only natural forcings cannot simulate the warming, then the anthropogenic forcings must be responsible for the global warming. But the logic is flawed—fatally flawed. There are naturally occurring ocean-atmosphere processes that can cause global surface temperatures to warm and cool without being forced to do so by man-made greenhouse gases. The climate models do not simulate those processes, so they are not considered in attribution studies like this. There’s another way to look at this. One of the greatest climate-model failings is their inability to simulate naturally occurring ocean-atmosphere processes…like those associated with El Niño and La Niña events, like those associated with the Atlantic Multidecadal Oscillation. We’ll present those failings later in the book. So like anyone trying to market a flawed product, the crafty IPCC turned those failings into a positive by ignoring them in their attribution studies. [End Reprint] Yup, that’s a pretty pathetic way to attribute the recent bout of global warming to anthropogenic greenhouse gases. CLOSING Is President-elect Donald Trump correct to be skeptical of the politicized science behind hypothetical human-induced global warming/climate change? Of course, he is. Climate science was politicized in 1988 when the UN’s politicians founded and provided direction to the Intergovernmental Panel on Climate Change, the IPCC. Climate science was corrupted by politics in 1995, more than 2 decades ago, when politicians changed the language of the second assessment report of the IPCC. And, of course, climate scientists still to this day cannot realistically attribute to manmade causes the global warming we’ve experienced since the 1970s, because climate models cannot simulate naturally occurring, naturally fueled coupled ocean-atmosphere processes that can cause global surfaces to warm over multidecadal timeframes. The fact that climate models cannot simulate any warming unless they are forced by numerical representations of manmade greenhouse gases is a model failing, not a means to credibly attribute that warming to manmade greenhouse gases.
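To make the attribution logic criticized above concrete, here is a toy illustration. It compares late-20th-century trends from two hypothetical ensembles, one driven by natural forcings only and one by natural plus anthropogenic forcings, against a hypothetical observed series. Every number here is invented for illustration; this is the shape of the argument, not real model output.

```python
import numpy as np

years = np.arange(1880, 2011)
rng = np.random.default_rng(1)

# Invented series: observations, a natural-only ensemble mean (no long-term
# trend), and a natural+anthropogenic ensemble mean (trended)
observed = 0.006 * np.maximum(years - 1910, 0) + rng.normal(0, 0.08, years.size)
natural_only = 0.05 * np.sin((years - 1880) / 10.0) + rng.normal(0, 0.08, years.size)
all_forcings = 0.007 * np.maximum(years - 1910, 0) + rng.normal(0, 0.08, years.size)

late = years >= 1975
for name, series in (("observed", observed), ("natural-only", natural_only),
                     ("natural+anthro", all_forcings)):
    trend = np.polyfit(years[late], series[late], 1)[0] * 10
    print(f"{name:>15}: {trend:+.2f} deg C/decade since 1975")
# Because the natural-only ensemble shows ~no trend, the difference gets
# attributed to anthropogenic forcings -- the step the post argues is invalid
# when the models omit natural ocean-atmosphere processes (ENSO, the AMO).
```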
October 2016 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly Update - This post provides updates of the values for the three primary suppliers of global land+ocean surface temperature reconstructions—GISS through October 2016 and HADCRUT4 and NOAA NCEI (formerly NOAA NCDC) through September 2016—and of the two suppliers of satellite-based lower  troposphere temperature composites (RSS and UAH) through October 2016. It also includes a few model-data comparisons. This is simply an update, but it includes a good amount of background information for those new to the datasets. Because it is an update, there is no overview or summary for this post. There are, however, summaries for the individual datasets. So for those familiar with the  datasets, simply fast-forward to the graphs and read the summaries under the heading of “Update”.   INITIAL NOTES: We discussed and illustrated the impacts of the adjustments to surface temperature data in the posts: Do the Adjustments to Sea Surface Temperature Data Lower the Global Warming Rate? UPDATED: Do the Adjustments to Land Surface Temperature Data Increase the Reported Global Warming Rate? Do the Adjustments to the Global Land+Ocean Surface Temperature Data Always Decrease the Reported Global Warming Rate? The NOAA NCEI product is the new global land+ocean surface reconstruction with the manufactured warming presented in Karl et al. (2015). For summaries of the oddities found in the new NOAA ERSST.v4 “pause-buster” sea surface temperature data see the posts: The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014 Even though the changes to the ERSST reconstruction since 1998 cannot be justified by the night marine air temperature product that was used as a reference for bias adjustments (See comparison graph here), and even though NOAA appears to have manipulated the parameters (tuning knobs) in their sea surface temperature model to produce high warming rates (See the post here), GISS also switched to the new “pause-buster” NCEI ERSST.v4 sea surface temperature reconstruction with their July 2015 update. The UKMO also recently made adjustments to their HadCRUT4 product, but they are minor compared to the GISS and NCEI adjustments. We’re using the UAH lower troposphere temperature anomalies Release 6.5 for this post even though it’s in beta form. And for those who wish to whine about my portrayals of the changes to the UAH and to the GISS and NCEI products, see the post here. The GISS LOTI surface temperature reconstruction and the two lower troposphere temperature composites are for the most recent month. The HADCRUT4 and NCEI products lag one month. Much of the following text is boilerplate that has been updated for all products. The boilerplate is intended for those new to the presentation of global surface temperature anomalies. Most of the graphs in the update start in 1979. That’s a commonly used start year for global temperature products because many of the satellite-based temperature composites start then. We discussed why the three suppliers of surface temperature products use different base years for anomalies in chapter 1.25 – Many, But Not All, Climate Metrics Are Presented in Anomaly and in Absolute Forms of my free ebook On Global Warming and the Illusion of Control – Part 1 (25MB). 
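Since the suppliers use different base years, the comparison graphs later in this post reference all of the products to a common 30-year period. A minimal sketch of that re-referencing, assuming a pandas Series of monthly anomalies (the example series here is synthetic, not one of the actual products):

```python
import numpy as np
import pandas as pd

def rebase(anomalies, start="1981-01", end="2010-12"):
    """Re-reference monthly anomalies to a common base period by
    subtracting that period's month-by-month means."""
    base = anomalies[start:end]
    clim = base.groupby(base.index.month).mean()
    return anomalies - clim.loc[anomalies.index.month].values

# Example: a synthetic series on a 1951-1980-style base, shifted to 1981-2010
idx = pd.date_range("1979-01", "2016-10", freq="MS")
series = pd.Series(np.linspace(0.1, 0.9, idx.size), index=idx)
rebased = rebase(series)
print(round(rebased["1981":"2010"].mean(), 6))  # ~0 over the new base period
```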
Since the July 2015 update, we’re using the UKMO’s HadCRUT4 reconstruction for the model-data comparisons using 61-month filters. And I’ve resurrected the model-data 30-year trend comparison using the GISS Land-Ocean Temperature Index (LOTI) data. For a continued change of pace, let’s start with the lower troposphere temperature data. I’ve left the illustration numbering as it was in the past when we began with the surface-based data. UAH LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (UAH TLT) Special sensors (microwave sounding units) aboard satellites have orbited the Earth since the late 1970s, allowing scientists to calculate the temperatures of the atmosphere at various heights above sea level (lower troposphere, mid troposphere, tropopause and lower stratosphere). The atmospheric temperature values are calculated from a series of satellites with overlapping operation periods, not from a single satellite. Because the atmospheric temperature products rely on numerous satellites, they are known as composites. The level nearest to the surface of the Earth is the lower troposphere. The lower troposphere temperature composite includes the altitudes of zero to about 12,500 meters, but it is most heavily weighted to the altitudes of less than 3000 meters. See the left-hand cell of the illustration here. The monthly UAH lower troposphere temperature composite is the product of the Earth System Science Center of the University of Alabama in Huntsville (UAH). UAH provides the lower troposphere temperature anomalies broken down into numerous subsets. See the webpage here. The UAH lower troposphere temperature composite is supported by Christy et al. (2000) MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons. Additionally, Dr. Roy Spencer of UAH presents the monthly UAH TLT anomaly updates at his blog a few days before the release at the UAH website. Those posts are also regularly cross posted at WattsUpWithThat. UAH uses the base years of 1981-2010 for anomalies. The UAH lower troposphere temperature product is for the latitudes of 85S to 85N, which represent more than 99% of the surface of the globe. UAH recently released a beta version of Release 6.0 of their atmospheric temperature product. Those enhancements lowered the warming rates of their lower troposphere temperature anomalies. See Dr. Roy Spencer’s blog post Version 6.0 of the UAH Temperature Dataset Released: New LT Trend = +0.11 C/decade and my blog post New UAH Lower Troposphere Temperature Data Show No Global Warming for More Than 18 Years. The UAH lower troposphere anomaly data, Release 6.5 beta, through October 2016 are here. Update: The October 2016 UAH (Release 6.5 beta) lower troposphere temperature anomaly is +0.41 deg C. It dropped slightly since September (a decrease of about -0.03 deg C). Figure 4 – UAH Lower Troposphere Temperature (TLT) Anomaly Composite – Release 6.5 Beta RSS LOWER TROPOSPHERE TEMPERATURE ANOMALY COMPOSITE (RSS TLT) Like the UAH lower troposphere temperature product, Remote Sensing Systems (RSS) calculates lower troposphere temperature anomalies from microwave sounding units aboard a series of NOAA satellites. RSS describes their product at the Upper Air Temperature webpage. The RSS product is supported by Mears and Wentz (2009) Construction of the Remote Sensing Systems V3.2 Atmospheric Temperature Records from the MSU and AMSU Microwave Sounders. RSS also presents their lower troposphere temperature composite in various subsets. The land+ocean TLT values are here.
Curiously, on that webpage, RSS lists the composite as extending from 82.5S to 82.5N, while on their Upper Air Temperature webpage linked above, they state: We do not provide monthly means poleward of 82.5 degrees (or south of 70S for TLT) due to difficulties in merging measurements in these regions. Also see the RSS MSU & AMSU Time Series Trend Browse Tool. RSS uses the base years of 1979 to 1998 for anomalies. Note: RSS recently released new versions of the mid-troposphere temperature (TMT) and lower stratosphere temperature (TLS) products. So far, their lower troposphere temperature product has not been updated to this new version. Update: The October 2016 RSS lower troposphere temperature anomaly is +0.35 deg C. It dropped noticeably (a decrease of about -0.23 deg C) since September 2016. Figure 5 – RSS Lower Troposphere Temperature (TLT) Anomalies GISS LAND OCEAN TEMPERATURE INDEX (LOTI) Introduction: The GISS Land Ocean Temperature Index (LOTI) reconstruction is a product of the Goddard Institute for Space Studies. Starting with the June 2015 update, GISS LOTI uses the new NOAA Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), the pause-buster reconstruction, which also infills grids without temperature samples. For land surfaces, GISS adjusts GHCN and other land surface temperature products via a number of methods and infills areas without temperature samples using 1200km smoothing. Refer to the GISS description here. Unlike the UK Met Office and NCEI products, GISS masks sea surface temperature data at the poles, anywhere seasonal sea ice has existed, and they extend land surface temperature data out over the oceans in those locations, regardless of whether or not sea surface temperature observations for the polar oceans are available that month. Refer to the discussions here and here. GISS uses the base years of 1951-1980 as the reference period for anomalies. The values for the GISS product are found here. (I archived the former version here at the WaybackMachine.) Update: The October 2016 GISS global temperature anomaly is +0.89 deg C. According to the GISS LOTI data, global surface temperature anomalies made a slight downtick in October, a -0.01 deg C decrease. Figure 1 – GISS Land-Ocean Temperature Index NCEI GLOBAL SURFACE TEMPERATURE ANOMALIES (LAGS ONE MONTH) NOTE: The NCEI only produces the product with the manufactured-warming adjustments presented in the paper Karl et al. (2015). As far as I know, the former version of the reconstruction is no longer available online. For more information on those curious NOAA adjustments, see the posts: NOAA/NCDC’s new ‘pause-buster’ paper: a laughable attempt to create warming by adjusting past data More Curiosities about NOAA’s New “Pause Busting” Sea Surface Temperature Dataset Open Letter to Tom Karl of NOAA/NCEI Regarding “Hiatus Busting” Paper NOAA Releases New Pause-Buster Global Surface Temperature Data and Immediately Claims Record-High Temps for June 2015 – What a Surprise! And more recently: Pause Buster SST Data: Has NOAA Adjusted Away a Relationship between NMAT and SST that the Consensus of CMIP5 Climate Models Indicate Should Exist?
The Oddities in NOAA’s New “Pause-Buster” Sea Surface Temperature Product – An Overview of Past Posts On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014 Introduction: The NOAA Global (Land and Ocean) Surface Temperature Anomaly reconstruction is the product of the National Centers for Environmental Information (NCEI), which was formerly known as the National Climatic Data Center (NCDC). NCEI merges their new “pause buster” Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4) with the new Global Historical Climatology Network-Monthly (GHCN-M) version 3.3.0 for land surface air temperatures. The ERSST.v4 sea surface temperature reconstruction infills grids without temperature samples in a given month. NCEI also infills land surface grids using statistical methods, but they do not infill over the polar oceans when sea ice exists. When sea ice exists, NCEI leaves the polar ocean grid blank. The source of the NCEI values is their Global Surface Temperature Anomalies webpage. (Click on the link to Anomalies and Index Data.) Update (Lags One Month): The September 2016 NCEI global land plus sea surface temperature anomaly was +0.89 deg C. See Figure 2. It remained relatively flat (a decrease of about -0.01 deg C) since August 2016. Figure 2 – NCEI Global (Land and Ocean) Surface Temperature Anomalies UK MET OFFICE HADCRUT4 (LAGS ONE MONTH) Introduction: The UK Met Office HADCRUT4 reconstruction merges the CRUTEM4 land-surface air temperature product and the HadSST3 sea-surface temperature (SST) reconstruction. CRUTEM4 is the product of the combined efforts of the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia. And HadSST3 is a product of the Hadley Centre. Unlike the GISS and NCEI reconstructions, grids without temperature samples for a given month are not infilled in the HADCRUT4 product. That is, if a 5-deg latitude by 5-deg longitude grid does not have a temperature anomaly value in a given month, it is left blank. Blank grids are indirectly assigned the average values for their respective hemispheres before the hemispheric values are merged. The HADCRUT4 reconstruction is described in the Morice et al (2012) paper here. The CRUTEM4 product is described in Jones et al (2012) here. And the HadSST3 reconstruction is presented in the 2-part Kennedy et al (2012) paper here and here. The UKMO uses the base years of 1961-1990 for anomalies. The monthly values of the HADCRUT4 product can be found here. Update (Lags One Month): The September 2016 HADCRUT4 global temperature anomaly is +0.71 deg C. See Figure 3. It had a downtick from August to September 2016, a decrease of about -0.05 deg C. Figure 3 – HADCRUT4 COMPARISONS The GISS, HADCRUT4 and NCEI global surface temperature anomalies and the RSS and UAH lower troposphere temperature anomalies are compared in the next three time-series graphs. Figure 6 compares the five global temperature anomaly products starting in 1979. Again, due to the timing of this post, the HADCRUT4 and NCEI updates lag the UAH, RSS, and GISS products by a month. For those wanting a closer look at the more recent wiggles and trends, Figure 7 starts in 1998, which was the start year used by von Storch et al (2013) Can climate models explain the recent stagnation in global warming?
They, of course, found that the CMIP3 (IPCC AR4) and CMIP5 (IPCC AR5) models could NOT explain the recent slowdown in warming, but that was before NOAA manufactured warming with their new ERSST.v4 reconstruction…and before the strong El Niño of 2015/16. Figure 8 starts in 2001, which was the year Kevin Trenberth chose for the start of the warming slowdown in his RMS article Has Global Warming Stalled? Because the suppliers all use different base years for calculating anomalies, I’ve referenced them to a common 30-year period: 1981 to 2010. Referring to their discussion under FAQ 9 here, according to NOAA: This period is used in order to comply with a recommended World Meteorological Organization (WMO) Policy, which suggests using the latest decade for the 30-year average. The impacts of the unjustifiable, excessive adjustments to the ERSST.v4 reconstruction are visible in the two shorter-term comparisons, Figures 7 and 8. That is, the short-term warming rates of the new NCEI and GISS reconstructions are noticeably higher than the HADCRUT4 data. See the June 2015 update for the trends before the adjustments. Figure 6 – Comparison Starting in 1979 ##### Figure 7 – Comparison Starting in 1998 ##### Figure 8 – Comparison Starting in 2001 Note also that the graphs list the trends of the CMIP5 multi-model mean (historic through 2005 and RCP8.5 forcings afterwards), which are the climate models used by the IPCC for their 5th Assessment Report.  The metric presented for the models is surface temperature, not lower troposphere. AVERAGE Figure 9 presents the average of the GISS, HADCRUT and NCEI land plus sea surface temperature anomaly reconstructions and the average of the RSS and UAH lower troposphere temperature composites. Again because the HADCRUT4 and NCEI products lag one month in this update, the most current monthly average only includes the GISS product. Figure 9 – Average of Global Land+Sea Surface Temperature Anomaly Products MODEL-DATA COMPARISON & DIFFERENCE As noted above, the models in this post are represented by the CMIP5 multi-model mean (historic through 2005 and RCP8.5 forcings afterwards), which are the climate models used by the IPCC for their 5th Assessment Report. Considering the uptick in surface temperatures in 2014, 2015 and now 2016 (see the posts here and here), government agencies that supply global surface temperature products have been touting “record high” combined global land and ocean surface temperatures. Alarmists happily ignore the fact that it is easy to have record high global temperatures in the midst of a hiatus or slowdown in global warming, and they have been using the recent record highs to draw attention away from the difference between observed global surface temperatures and the IPCC climate model-based projections of them. There are a number of ways to present how poorly climate models simulate global surface temperatures. Normally they are compared in a time-series graph.  See the example in Figure 10. In that example, the UKMO HadCRUT4 land+ocean surface temperature reconstruction is compared to the multi-model mean of the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report. The reconstruction and model outputs have been smoothed with 61-month running-mean filters to reduce the monthly variations.  
The climate science community commonly uses a 5-year running-mean filter (basically the same as a 61-month filter) to minimize the impacts of El Niño and La Niña events, as shown on the GISS webpage here. Using a 5-year running mean filter has been commonplace in global temperature-related studies for decades. (See Figure 13 here from Hansen and Lebedeff 1987 Global Trends of Measured Surface Air Temperature.) Also, the anomalies for the reconstruction and model outputs have been referenced to the period of 1880 to 2013 so as not to bias the results. That is, by using almost the full term of the data, no one with the slightest bit of common sense can claim I’ve cherry picked the base years for anomalies with this comparison. Figure 10 It’s very hard to overlook the fact that, over the past decade, climate models are simulating way too much warming…even with the small recent El Niño-related uptick in the data. Another way to show how poorly climate models perform is to subtract the observations-based reconstruction from the average of the model outputs (model mean). We first presented and discussed this method using global surface temperatures in absolute form.
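For anyone reproducing the smoothing used above, here is a minimal sketch of a centered 61-month running-mean filter in pandas, applied to a synthetic stand-in for a monthly anomaly series. Subtracting the smoothed observations from the smoothed model mean then gives the difference series mentioned in the last paragraph.

```python
import numpy as np
import pandas as pd

def smooth_61_month(series):
    """Centered 61-month running mean -- roughly the 5-year smoother
    used to damp El Nino / La Nina wiggles in the comparisons above."""
    return series.rolling(window=61, center=True).mean()

# Synthetic stand-in for a monthly land+ocean anomaly series
idx = pd.date_range("1880-01", "2016-09", freq="MS")
rng = np.random.default_rng(2)
anoms = pd.Series(np.linspace(-0.3, 0.8, idx.size)
                  + rng.normal(0, 0.1, idx.size), index=idx)

print(smooth_61_month(anoms).dropna().tail())
```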
October 2016 Sea Surface Temperature (SST) Anomaly Update - MONTHLY SEA SURFACE TEMPERATURE ANOMALY MAP The following is a global map of Reynolds OI.v2 Sea Surface Temperature (SST) anomalies for October 2016. It was downloaded from the KNMI Climate Explorer. The contour range was set to -2.5 to +2.5 deg C and the anomalies are referenced to the WMO-preferred period of 1981-2010. October 2016 Sea Surface Temperature (SST) Anomalies Map (Global SST Anomaly = +0.32 deg C) MONTHLY GLOBAL OVERVIEW The global Sea Surface Temperature anomaly for October 2016 shows basically no change since September. A downtick in the Northern Hemisphere (-0.05 deg C) was countered by an uptick in the Southern Hemisphere (+0.03 deg C). Last month, the North Pacific was the basin to show the most cooling (-0.07 deg C), while the South Pacific (+0.05 deg C) and the Southern Ocean (+0.06 deg C) showed the most warming. Monthly sea surface temperature anomalies for the NINO3.4 region continue to show weak La Niña conditions. The monthly Global Sea Surface Temperature anomalies are presently at +0.32 deg C, referenced to the WMO-preferred base years of 1981 to 2010. (1) Global Sea Surface Temperature Anomalies Monthly Change = 0.00 deg C THE EQUATORIAL PACIFIC The monthly NINO3.4 Sea Surface Temperature anomalies for October 2016 are continuing to decline and remain below the threshold of a La Niña. They were at -0.74 deg C, having declined about -0.12 deg C since September. (Also see the Weekly data shown near the end of the post.) (2) NINO3.4 Sea Surface Temperature Anomalies (5S-5N, 170W-120W) Monthly Change = -0.12 deg C #################################### The sea surface temperature anomalies for the NINO3.4 region in the east-central equatorial Pacific (5S-5N, 170W-120W) are a commonly used index for the strength, frequency and duration of El Niño and La Niña events. We keep an eye on the sea surface temperatures there because El Niño and La Niña events are the primary cause of the yearly variations in global sea surface temperatures AND they are the primary cause of the long-term warming of global sea surface temperatures over the past 30+ years. See the discussion of the East Pacific versus the Rest-of-the-World that follows. We present NINO3.4 sea surface temperature anomalies in monthly and weekly formats in these updates. Also see the weekly values toward the end of the post. INITIAL NOTES Note 1: I’ve downloaded the Reynolds OI.v2 data from the KNMI Climate Explorer, using the base years of 1981-2010. The updated base years help to reduce the seasonal components in the ocean-basin subsets—they don’t eliminate those seasonal components, but they reduce them. Note 2: We discussed the reasons for the elevated sea surface temperatures in 2014 in the post On The Recent Record-High Global Sea Surface Temperatures – The Wheres and Whys. For 2015, The Blob and the El Niño are responsible for the noticeable increases. See General Discussion 3 – On the Reported Record High Global Surface Temperatures in 2015 – And Will Those Claims Continue in 2016? in my ebook On Global Warming and the Illusion of Control – Part 1. Note 3: I’ve moved the model-data comparison to the end of the post. Note 4: I recently added a graph of the sea surface temperature anomalies for The Blob in the eastern extratropical North Pacific. It also is toward the end of the post.
Note 5: The sea surface temperature data in this post are the original (weekly/monthly, 1-deg spatial resolution) version of NOAA’s Optimum Interpolation (OI) Sea Surface Temperature (SST) v2 (aka Reynolds OI.v2)…not the (over-inflated, out-of-the-ballpark, extremely high warming rate) high-resolution, daily version of NOAA’s Reynolds OI.v2 data, which we illustrated and discussed in the recent post On the Monumental Differences in Warming Rates between Global Sea Surface Temperature Datasets during the NOAA-Picked Global-Warming Hiatus Period of 2000 to 2014. THE EAST PACIFIC VERSUS THE REST OF THE WORLD NOTE: This section of the updates has been revised. We discussed the reasons for the changes in the post Changes to the Monthly Sea Surface Temperature Anomaly Updates. For years, we have shown and discussed that the surfaces of the global oceans have not warmed uniformly during the satellite era of sea surface temperature composites. In fact, some portions of the global oceans have cooled during that 3+ decade period. One simply has to look at a trend map for the period of 1982 to 2013 to see where the ocean surfaces had warmed and where they had not. Yet the climate science community has not addressed this. See the post Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans. The North Atlantic (anomalies illustrated later in the post) has had the greatest warming over the past 3+ decades, but the reason for this is widely known. The North Atlantic has an additional mode of natural variability called the Atlantic Multidecadal Oscillation. If you’re not familiar with the Atlantic Multidecadal Oscillation, see the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts An Introduction To ENSO, AMO, and PDO — Part 2 and Multidecadal Variations and Sea Surface Temperature Reconstructions. As a result of the Atlantic Multidecadal Oscillation, the surface of the North Atlantic warmed at a rate that was more than twice the rate of the surface of the rest of the global oceans. See the trend comparison graph here. The East Pacific Ocean also stands out in the trend map linked above. Some portions of its surfaces warmed and others cooled. It comes as no surprise then that the linear trend of the East Pacific (90S-90N, 180-80W) Sea Surface Temperature anomalies since the start of the Reynolds OI.v2 composite is so low. With the strong El Niño conditions in the eastern tropical Pacific and The Blob, it has acquired a slight positive trend, but it’s still far below the approximate +0.15 deg C/decade warming rate predicted by the CMIP5 climate models. Please see Figure 19 in the post Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans. (Note that the region also includes portions of the Arctic and Southern Oceans.) That is, there has been little warming of the sea surfaces of the East Pacific (from pole to pole) in 3-plus decades. The East Pacific is not a small region. It represents about 33% of the surface area of the global oceans. Notice how there appears to have been a strong El Niño event in 2014 in the East Pacific values, while there had only been a small event that year, and how the strong El Niño in 2015 caused a further rise. Note also how there appears to have been a shift in 2013. Refer again to the post On The Recent Record-High Global Sea Surface Temperatures – The Wheres and Whys.
(3) East Pacific Sea Surface Temperature (SST) Anomalies (90S-90N, 180-80W) #################################### That leaves the largest region of the trend map, which includes the South Atlantic, the Indian and West Pacific Oceans, along with the corresponding portions of the Arctic and Southern Oceans. Sea surface temperatures there warmed in very clear steps, in response to the significant 1986/87/88 and 1997/98 El Niño/La Niña events. It also appears as though the sea surface temperature anomalies of this subset have made another upward shift in response to the 2009/10 El Niño and 2010/11 La Niña events. I further described the ENSO-related processes that cause these upward steps in the recent post Answer to the Question Posed at Climate Etc.: By What Mechanism Does an El Niño Contribute to Global Warming? As you’ll note, the values for the South Atlantic, Indian and West Pacific Oceans now appear to be responding to the El Niño with another uptick. (4) Sea Surface Temperature Anomalies of The South Atlantic-Indian-West Pacific Oceans (Weighted Average of 0-90N, 40E-180 @ 27.9% And 90S-0, 80W-180 @ 72.1%) #################################### NOTE: I have updated the above illustration and the following discussion, because NOAA has recently revised their Oceanic NINO Index…once again. They’ve used the base years of 1986-2015 for the most recent data, which has resurrected the 2014/15 El Niño. The periods used for the average temperature anomalies for the South Atlantic-Indian-West Pacific subset between the significant El Niño events of 1982/83, 1986/87/88, 1997/98, 2009/10 and 2015/16 are determined as follows. Using the most recent NOAA Oceanic Nino Index (ONI) for the official months of those El Niño events, I shifted (lagged) those El Niño periods by six months to accommodate the lag between NINO3.4 SST anomalies and the response of the South Atlantic-Indian-West Pacific Oceans, then deleted the South Atlantic-Indian-West Pacific values that correspond to those significant El Niño events. I then averaged the South Atlantic-Indian-West Pacific sea surface temperature anomalies between those El Niño-related gaps. You’ll note I’ve ended the average for the period after the 2009/10 El Niño; that was done to accommodate the expected response to the 2015/16 El Niño. The Sea Surface Temperature anomalies of the East Pacific Ocean, or approximately 33% of the surface area of the global oceans, have shown comparatively little long-term warming since 1982 based on the linear trend. And between upward shifts, the Sea Surface Temperature anomalies for the South Atlantic-Indian-West Pacific subset (about 52.5% of the global ocean surface area) remain relatively flat, though they actually cool slightly. Anthropogenic forcings are said to be responsible for most of the rise in global surface temperatures over this period, but the Sea Surface Temperature anomaly graphs of those regions discussed above prompt a two-part question: Since 1982, what anthropogenic global warming processes would overlook the sea surface temperatures of 33% of the global oceans, and have an impact on the other 52% only during the months of the significant El Niño events of 1986/87/88, 1997/98 and 2009/10? These ENSO-related processes were also discussed in great detail in my recently published book Who Turned on the Heat? The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation. See the blog post Everything You Ever Wanted to Know about El Niño and La Niña… for an overview.
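The subset above is built from two boxes with the area weights given in the caption, and the "flat periods" are computed by masking out the lagged El Niño months before averaging. Here is a rough Python sketch of both steps; the ONI-based boolean mask is assumed to exist already, and the six-month lag mirrors the procedure described above.

import numpy as np

def saiwp_anomalies(north_box, south_box):
    # Area-weighted combination of the two boxes in the caption:
    # 0-90N, 40E-180 at 27.9% and 90S-0, 80W-180 at 72.1%.
    return 0.279 * north_box + 0.721 * south_box

def between_event_means(anomalies, el_nino_mask, lag=6):
    # Shift the official El Niño months forward by six months, drop them,
    # then average each remaining segment between the gaps separately.
    lagged = np.roll(el_nino_mask, lag)
    lagged[:lag] = False  # np.roll wraps around; clear the wrapped head
    means, segment = [], []
    for value, in_event in zip(anomalies, lagged):
        if in_event:
            if segment:
                means.append(np.mean(segment))
                segment = []
        else:
            segment.append(value)
    if segment:
        means.append(np.mean(segment))
    return means  # one average per inter-El Niño period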
It’s now free. Click here for a copy. STANDARD NOTE ABOUT THE REYNOLDS OI.V2 COMPOSITE The MONTHLY graphs illustrate raw monthly OI.v2 sea surface temperature anomalies from November 1981 to October 2016, as presented by the KNMI Climate Explorer. While NOAA uses the base years of 1971-2000 for this product, those base years cannot be used at the KNMI Climate Explorer because they extend before the start year of the product. (NOAA had created a special climatology for the Reynolds OI.v2 product.) I’ve referenced the anomalies to the period of 1981 to 2010, which is actually 1982 to 2010 for most months. MONTHLY INDIVIDUAL OCEAN AND HEMISPHERIC SEA SURFACE TEMPERATURE UPDATES
(5) Northern Hemisphere Sea Surface Temperature (SST) Anomalies - Monthly Change = -0.05 deg C
(6) Southern Hemisphere Sea Surface Temperature (SST) Anomalies - Monthly Change = +0.03 deg C
(7) North Atlantic Sea Surface Temperature (SST) Anomalies (0 to 70N, 80W to 0) - Monthly Change = -0.04 deg C
(8) South Atlantic Sea Surface Temperature (SST) Anomalies (0 to 60S, 70W to 20E) - Monthly Change = -0.01 deg C
(9) Pacific Sea Surface Temperature (SST) Anomalies (60S to 65N, 120E to 80W) - Monthly Change = -0.01 deg C
(10) North Pacific Sea Surface Temperature (SST) Anomalies (0 to 65N, 100E to 90W) - Monthly Change = -0.07 deg C
(11) South Pacific Sea Surface Temperature (SST) Anomalies (0 to 60S, 120E to 70W) - Monthly Change = +0.05 deg C
(12) Indian Ocean Sea Surface Temperature (SST) Anomalies (60S to 30N, 20E to 120E) - Monthly Change = +0.03 deg C
(13) Arctic Ocean Sea Surface Temperature (SST) Anomalies (65N to 90N) - Monthly Change = +0.01 deg C
(14) Southern Ocean Sea Surface Temperature (SST) Anomalies (90S-60S) - Monthly Change = +0.06 deg C
WEEKLY SEA SURFACE TEMPERATURE ANOMALIES Weekly NINO3.4 sea surface temperature anomalies are at -0.9 deg C, which is below (“cooler” than) the threshold of La Niña conditions. Based on the weekly Reynolds OI.v2 data, they peaked at a higher anomaly than the 1997/98 El Niño. But as discussed in the post Is the Current El Niño Stronger Than the One in 1997/98?, the 1997/98 El Niño was a stronger East Pacific El Niño than the 2015/16 event. If you’d like to argue that this was the strongest El Niño EVER, also see the post The Differences between Sea Surface Temperature Datasets Prevent Us from Knowing Which El Niño Was Strongest According NINO3.4 Region Temperature Data. (15) Weekly NINO3.4 Sea Surface Temperature (SST) Anomalies You’ll note that I included a comparison of the evolutions of the NINO3.4 sea surface temperature anomalies for the 1997/98 and 2015/16 El Niños. Just wanted to show that the transition this year toward La Niña has lagged behind the transition in 1998. Note: I’ve used the weekly NINO3.4 values available from the NOAA/CPC Monthly Atmospheric & SST Indices webpage, specifically the listing here.
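As a reference point for the NINO3.4 values quoted above: NOAA treats sustained anomalies at or beyond plus or minus 0.5 deg C as El Niño or La Niña conditions. A toy classifier follows; note the operational ONI definition also requires the threshold to persist across several overlapping three-month seasons.

def enso_state(nino34_anomaly, threshold=0.5):
    # Sign convention: positive anomalies toward El Niño, negative toward La Niña.
    if nino34_anomaly >= threshold:
        return "El Niño conditions"
    if nino34_anomaly <= -threshold:
        return "La Niña conditions"
    return "ENSO-neutral"

print(enso_state(-0.9))   # the weekly value above -> "La Niña conditions"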
#################################### MODEL-DATA COMPARISON: To counter the nonsensical “Just what AGW predicts” rantings of alarmists about the “record-high” global sea surface temperatures in 2014 and 2015, I’ve added a model-data comparison of satellite-era global sea surface temperatures to these monthly updates. See the example below. The models are represented by the multi-model ensemble-member mean of the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report. For further information on the use of the model mean, see the post here. For most models, historic forcings run through 2005 (2012 for others) and the middle-of-the-road RCP6.0 forcings are used thereafter in this comparison. The data are represented by NOAA’s Optimum Interpolation Sea Surface Temperature data, version 2—a.k.a. Reynolds OI.v2—which is NOAA’s best. The model outputs and data have been shifted so that their trend lines begin at “zero” anomaly for the November 1981 start month of this composite. That “zeroing” helps to highlight how poorly the models simulate the warming of the ocean surfaces…the modeled warming rate is noticeably higher than the observed one. Both the Reynolds OI.v2 values and the model outputs of their simulations of sea surface temperature (TOS) are available to the public at the KNMI Climate Explorer. Model-Data Comparison #################################### THE BLOB We discussed the demise of the short-term reappearance of The Blob in the post THE BLOB has Dissipated. I’ll continue to present the sea surface temperature anomalies for The Blob region for one more month to show the enormous decline there since September. (16) The Blob (40N-50N, 150W-130W) Monthly Change = -1.19 deg C Note: I’ve changed the coordinates for The Blob to 40N-50N, 150W-130W to agree with those used in the NOAA/NCEP Monthly Ocean Briefing. I had been using the coordinates of 35N-55N, 150W-125W for The Blob. HURRICANE MAIN DEVELOPMENT REGION The sea surface temperatures of the tropical North Atlantic are one of the primary factors that contribute to the development and maintenance of hurricanes. I’ve recently added to the update the sea surface temperatures and anomalies for the hurricane main development region of the North Atlantic. It is often represented by the coordinates of 10N-20N, 80W-20W. While hurricanes tend to form there, they can also form outside it. 8 Nov
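A note on the trend-line "zeroing" used in the model-data comparison above: both series are shifted so their fitted trend lines pass through zero at the November 1981 start month. A minimal sketch of that adjustment:

import numpy as np

def zero_trend_at_start(decimal_years, series):
    # Fit a straight line to the series, then shift the whole series so
    # the fitted line passes through zero at the first time step.
    slope, intercept = np.polyfit(decimal_years, series, 1)
    return series - (slope * decimal_years[0] + intercept)

# Applied to both the CMIP5 multi-model mean and the Reynolds OI.v2 data,
# any gap that opens up later in the record then reflects a difference in
# warming rates rather than an arbitrary choice of anomaly offset.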
NASA Webb Telescope Structure is Sound After Vibration Testing Detects Anomaly - The 18-segment gold coated primary mirror of NASA’s James Webb Space Telescope is raised into vertical alignment in the largest clean room at the agency’s Goddard Space Flight Center in Greenbelt, Maryland, on Nov. 2, 2016. The secondary mirror mount booms are folded down into stowed-for-launch configuration. Credit: Ken Kremer/kenkremer.com NASA GODDARD SPACE FLIGHT CENTER, MD – The James Webb Space Telescope (JWST) is now deemed “sound” and apparently unscathed, engineers have concluded, based on results from a new batch of intensive inspections of the observatory’s structure, after concerns were raised in early December when technicians initially detected “anomalous readings” during a preplanned series of vibration tests, NASA announced Dec. 23. After conducting both “visual and ultrasonic examinations” at NASA’s Goddard Space Flight Center in Maryland, engineers have found the structure to be safe at this point, with “no visible signs of damage.” But because so much is on the line with NASA’s $8.8 billion groundbreaking Webb telescope mission, which will peer back to nearly the dawn of time, engineers are still investigating the “root cause” of the “vibration anomaly” first detected amidst shake testing on Dec. 3. “The team is making good progress at identifying the root cause of the vibration anomaly,” NASA explained in a Dec. 23 statement, much to everyone’s relief. “They have successfully conducted two low level vibrations of the telescope.” “All visual and ultrasonic examinations of the structure continue to show it to be sound.” Starting in late November, technicians began a defined series of environmental tests, including vibration and acoustics tests, to make sure that the telescope’s huge optical structure was fit for blastoff and could safely withstand the powerful shaking encountered during a rocket launch and the especially harsh rigors of the space environment. To carry out the vibration and acoustics tests, conducted on equipment located in a shirtsleeve environment, the telescope structure was first carefully placed inside a ‘clean tent’ structure to protect it from dirt and grime and maintain the pristine conditions available inside Goddard’s massive clean room, where it has been undergoing assembly for the past year. NASA’s James Webb Space Telescope placed inside a “clean tent” in Nov. 2016 to protect it from dust and dirt as engineers at NASA’s Goddard Space Flight Center in Greenbelt, Maryland transport it out of the relatively dust-free cleanroom and into a shirtsleeve environment to conduct vibration and acoustics tests to confirm it is fit for launch in 2018. Credit: NASA/Chris Gunn The Webb telescope will launch on an Ariane V booster from the Guiana Space Center in Kourou, French Guiana in 2018. “The James Webb Space Telescope is undergoing testing to make sure the spacecraft withstands the harsh conditions of launch, and to find and remedy all possible concerns before it is launched from French Guiana in 2018.” However, the engineers soon discovered unexpected “anomalous readings” during a shake test of the telescope on Dec. 3, as the agency initially announced in a status update on the JWST. The anomalous readings were detected during one of the vibration tests in progress on the shaker table, via accelerometers attached to the observatory’s optical structure, known as OTIS.
“During the vibration testing on December 3, at Goddard Space Flight Center in Greenbelt, Maryland, accelerometers attached to the telescope detected anomalous readings during a particular test,” the team elaborated. So the team quickly conducted further “low level vibration” tests and inspections to more fully understand the nature of the anomaly, as well as scrutinize the accelerometer data for clues. “Further tests to identify the source of the anomaly are underway. The engineering team investigating the vibe anomaly has made numerous detailed visual inspections of the Webb telescope and has found no visible signs of damage.” “They are continuing their analysis of accelerometer data to better determine the source of the anomaly.” The team is measuring and recording the responses of the structure to the fresh low level vibration tests and will compare these new data to results obtained prior to detection of the anomaly. Work continues over the holidays to ensure Webb is safe and sound and can meet its 2018 launch target. After thoroughly reviewing all the data, the team hopes to start the planned vibration and acoustic testing in the new year. “Currently, the team is continuing their analyses with the goal of having a review of their findings, conclusions and plans for resuming vibration testing in January.” Webb’s massive optical structure being tested is known as OTIS, or Optical Telescope element and Integrated Science. It includes the fully assembled 18-segment gold coated primary mirror and the science instrument module housing the four science instruments. OTIS is a combination of the OTE (Optical Telescope Element) and the ISIM (Integrated Science Instrument Module). “OTIS is essentially the entire optical train of the observatory!” said John Durning, Webb Telescope Deputy Project Manager, in an earlier exclusive interview with Universe Today at NASA’s Goddard Space Flight Center. “It’s the critical photon path for the system.” The components were fully integrated this past summer at Goddard. The combined OTIS entity of mirrors, science module and backplane truss weighs 8,786 lbs (3,940 kg) and measures 28’3” (8.6 m) x 8’5” (2.6 m) x 7’10” (2.4 m). The environmental testing is being done at Goddard before shipping the huge structure to NASA’s Johnson Space Center in February 2017 for further ultra-low-temperature testing in the cryovac thermal vacuum chamber. The 6.5 meter diameter ‘golden’ primary mirror is comprised of 18 hexagonal segments – honeycomb-like in appearance. And it’s just mesmerizing to gaze at – as I had the opportunity to do on a few occasions at Goddard this past year – standing vertically in November and seated horizontally in May. Each of the 18 hexagonal-shaped primary mirror segments measures just over 4.2 feet (1.3 meters) across and weighs approximately 88 pounds (40 kilograms). They are made of beryllium, gold coated, and about the size of a coffee table. All 18 gold coated primary mirrors of NASA’s James Webb Space Telescope are seen fully unveiled after removal of protective covers installed onto the backplane structure, as technicians work inside the massive clean room at NASA’s Goddard Space Flight Center in Greenbelt, Maryland on May 3, 2016. The secondary mirror mount booms are folded down into stowed-for-launch configuration. Credit: Ken Kremer/kenkremer.com The Webb Telescope is a joint international collaborative project between NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA).
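NASA has not published the details of its accelerometer analysis, but the before-and-after comparison described above can be illustrated generically: compute the frequency response of each low-level vibration run and flag bins where the structure's signature has shifted. A purely hypothetical sketch, not the Webb team's actual pipeline:

import numpy as np

def response_spectrum(accel, sample_rate_hz):
    # Windowed magnitude spectrum of one accelerometer record.
    windowed = accel * np.hanning(len(accel))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / sample_rate_hz)
    return freqs, spectrum

def flag_changed_bins(baseline_spectrum, fresh_spectrum, tolerance=0.10):
    # Structural changes tend to shift or attenuate resonant peaks, so
    # flag frequency bins whose response changed by more than ~10%.
    ratio = fresh_spectrum / np.maximum(baseline_spectrum, 1e-12)
    return np.where(np.abs(ratio - 1.0) > tolerance)[0]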
Webb is designed to look at the first light of the Universe and will be able to peer back in time to when the first stars and first galaxies were forming. It will also study the history of our universe and the formation of our solar system, as well as other solar systems and exoplanets, some of which may be capable of supporting life on planets similar to Earth. Watch this space for my ongoing reports on JWST mirrors, science, construction and testing. Stay tuned here for Ken’s continuing Earth and Planetary science and human spaceflight news. Ken Kremer The post NASA Webb Telescope Structure is Sound After Vibration Testing Detects Anomaly appeared first on Universe Today. 15:35
What My Dog Taught Me About Time and Space - Sammy and her namesake, Sirius the Dog Star, on a winter night. Photos by the author Like many of you, I’m the owner of a furry Canis Major. Her name is Sammy. We always thought she was mostly border collie, but my daughter gifted me with a doggie DNA kit a few years back, and now we know with scientific certainty that she’s a mix of German shepherd, Siberian husky and golden retriever. Yeah, she’s a mutt. Sammy’s going on 17 years old now — that’s human years — and has neither the spunk nor bladder control of a young pup. She wanders, paces, gets confused. In her aging, I see what’s in store for all of us as we pass from one stage of life to the next. Intentionally or not, we humans often leave a legacy before we depart. Maybe a big building, a work of art or an exemplary life. As I stare down at my panting dog, it occurs to me that she’s leaving a legacy too, one she’s completely unaware of but which I’ll always appreciate. Thanks to my dog I’ve seen more auroras and lunar halos than I can count. That goes for meteors, contrails, space station passes, light pillars and moonrises, too. All this because she needs to be walked in the early morning and again at night. This simple act ensures that while Sammy sniffs and marks, I get to spend at least 20 minutes under the sky. Nearly every night of the year. Warm under her thick coat, she’s not bothered by the snow. I’m an amateur astronomer and keep tabs on what’s up, but my dog makes sure I don’t ignore the sky. Let’s say she keeps me honest. There’s no avoiding going out, or I’ll pay for it in whimpering and cleanup. There were times I wouldn’t be aware an aurora was underway until it was time to walk the dog. When we were done, I’d dash away to a dark sky with camera and tripod. Other nights, walking the dog would alert me to a sudden clearing and the opportunity to catch a variable star on the rise or see a newly discovered comet for the first time. Thanks, Sammy. Amateur astronomers are familiar with eternity. We routinely observe stars and galaxies by eye and telescope that remind us of both the vastness of space and the aching expanse of time. I have only so many years left before I spend the next 10 billion years disassembled and strewn about like that scarecrow attacked by flying monkeys. But when I see the Sombrero Galaxy through my telescope, with its 29-million-year-old photons setting off tiny explosions in my retinas, I get a taste of eternity in the here and now. That’s where Sammy offers yet another pearl. Dogs are far better at living in the moment than people are. They can eat the same food twice a day for a decade and relish it anew every single time. Same goes for their excitement at seeing their owner or taking a walk or a million other ways they reveal that this moment is what counts. The famous Sombrero galaxy (M104) is a bright nearby spiral galaxy. The prominent dust lane and halo of stars and globular clusters give this galaxy its name. Credit: NASA/ESA and The Hubble Heritage Team (STScI/AURA) People tend to think of eternity as encompassing all of time, but Sammy has a different take. A moment fully experienced feels like it might never end. Lose yourself in the moment, and the clock stops ticking. I love that feeling. That’s how my dog’s been living all along. Canine wisdom: one billion years = one moment. Both feel like forever. Sammy’s lost much of her hearing and some of her eyesight. We’re not sure how long she has.
Maybe a few months, maybe even another year, but her legacy is clear. She’s been a great pet and teacher even if she never figured out how to fetch. We’ve hiked hard trails together and then rested atop precipices with the sun sinking in the west. I look into her clouded eyes these days and have to speak up when I call her name, but she’s been and remains a “Good dog!” The post What My Dog Taught Me About Time and Space appeared first on Universe Today. 10:29
Comet U1 NEOWISE: A Possible Binocular Comet? - Comet C/2016 U1 NEOWISE on December 23rd as seen from Jauerling, Austria. Image credit: Michael Jäger. Well, it looks like we’ll close out 2016 without a great ‘Comet of the Century.’ One of the final discoveries of the year did, however, grab our attention, and may present a challenging target through early 2017: Comet U1 NEOWISE. Comet C/2016 U1 NEOWISE is expected to reach maximum brightness during the second week of January. Discovered by the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) space observatory on its extended mission on October 21st, 2016, Comet U1 NEOWISE orbits the Sun on a long-period, possibly hyperbolic orbit perhaps millions of years long. This also means that this could be Comet C/2016 U1 NEOWISE’s first venture through the inner solar system. Comet C/2016 U1 NEOWISE is set to break binocular +10th magnitude brightness this week, and may just top +6th magnitude (naked eye brightness) in mid-January near perihelion. The orbit of Comet U1 NEOWISE. Credit: NASA/JPL. Visibility prospects: At its brightest, Comet C/2016 U1 NEOWISE will pass through the constellations Ophiuchus to Serpens Cauda and Sagittarius, and is best visible in the dawn sky, 12 degrees from the Sun at maximum brightness. This apparition favors the northern hemisphere. Perihelion for Comet C/2016 U1 NEOWISE occurs on January 13th, 2017 at 0.319 AU from the Sun, and the comet passed 0.709 AU from the Earth on December 13th. This is the ninth comet discovered by the extended NEOWISE mission since 2014. The pre-dawn view on the morning of December 28th. Image credit: Starry Night. Comet C/2016 U1 NEOWISE ends 2016 and begins January 2017 as a difficult early dawn target, sitting 25 degrees above the eastern horizon as seen from latitude 30 degrees north about 30 minutes before dawn. Things will get much more difficult from there, as the comet passes just 12 degrees from the Sun as seen from our Earthly vantage point during the final week of January. The comet sits 16 degrees from the Sun in the southern hemisphere constellation of Microscopium on the final day of January, though it is expected to shine at only +10th magnitude at this point, favoring observers in the southern hemisphere. The time to try to catch a brief sight of Comet C/2016 U1 NEOWISE is now. Recent discussions among comet observers suggest that the comet may be slowing down in terms of brightening, possibly as a prelude to a pre-perihelion breakup. Keep an eye on the Comet Observer’s database (COBS) for the latest in cometary action as reported and seen by actual observers in the field. Finding C/2016 U1 NEOWISE will be a battle to spy an elusive, fuzzy, low-contrast coma against a brightening twilight sky. Sweep the suspect area with binoculars or a wide-field telescopic view if possible. The path of Comet U1 NEOWISE through perihelion on January 13th. Credit: Starry Night. Here are some key dates to watch out for in your quest:
December 25 – Crosses into Ophiuchus.
December 26 – Passes near the +3 magnitude star Kappa Ophiuchi.
January 1 – Crosses the celestial equator southward.
January 3 – Passes near M14.
January 7 – Passes near the +3 magnitude star Nu Ophiuchi.
January 8 – Crosses into the constellation Serpens Cauda.
January 10 – Passes near M16, the Eagle Nebula.
January 11 – Passes near M17, the Omega Nebula; crosses the galactic equator southward.
January 12 – Crosses into the constellation Sagittarius.
January 13 – Passes near M25.
January 16 – Crosses the ecliptic southward.
January 27 – Crosses into the constellation Microscopium.
January 28 – Passes near the +4.8 magnitude star Alpha Microscopii.
February 1 – May drop back below +10th magnitude.
C/2016 U1 NEOWISE (23.nov.2016) from Oleg Milantiev on Vimeo. A rundown on comets in 2016, a look ahead at 2017: C/2016 U1 NEOWISE was one of 50 comets discovered in 2016. Notables for the year included C/2013 X1 PanSTARRS, 252P/LINEAR and C/2013 US10 Catalina. What comets are we keeping an eye on in 2017? Well, comets 2P/Encke, 41P/Tuttle-Giacobini-Kresak, C/2015 ER61 PanSTARRS and C/2015 V2 Johnson are all expected to reach +10th magnitude brightness in the coming year… and Comet 45P/Honda-Mrkos-Pajdušáková has already done so, a bit ahead of schedule. These are all broken down in our forthcoming guide to the top 101 Astronomical Events for 2017. Again, there’s no great naked eye comet on the horizon (yet), but that all could change… 2017 owes us one! The post Comet U1 NEOWISE: A Possible Binocular Comet? appeared first on Universe Today. 09:08
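For readers who want to play with the brightness predictions quoted in this piece, comet total magnitudes are usually estimated with the standard empirical law m = M1 + 5 log10(delta) + 2.5 n log10(r). A small sketch follows; the absolute magnitude and activity index below are assumed values for illustration, not published parameters for U1 NEOWISE.

import math

def comet_magnitude(m1, r_sun_au, delta_earth_au, n=4.0):
    # m1: absolute magnitude; r: Sun distance; delta: Earth distance (AU).
    # n ~ 4 is a conventional activity index for a "typical" comet.
    return m1 + 5.0 * math.log10(delta_earth_au) + 2.5 * n * math.log10(r_sun_au)

# At U1 NEOWISE's perihelion distance (r = 0.319 AU) with an assumed
# m1 = 8 and delta = 1 AU, the law gives roughly magnitude +3; the +6
# peak expected above implies a fainter intrinsic brightness.
print(round(comet_magnitude(8.0, 0.319, 1.0), 1))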
Messier 29 – The NGC 6913 Open Star Cluster - Welcome back to Messier Monday! In our ongoing tribute to the great Tammy Plotner, we take a look at the open star cluster known as Messier 29. Enjoy! During the 18th century, famed French astronomer Charles Messier noted the presence of several “nebulous objects” in the night sky. Having originally mistaken them for comets, he began compiling a list of them so that others would not make the same mistake he did. In time, this list would come to include 100 of the most fabulous objects in the night sky. One of these objects is Messier 29, an open star cluster located in the northern skies in the direction of the Cygnus constellation. Situated in a highly crowded area of the Milky Way Galaxy, about 4,000 light-years from Earth, this star cluster is slowly moving towards us. Though somewhat isolated in the night sky, it can be easily spotted using binoculars and small telescopes. Description: While Messier Object 29 might appear a little bit boring compared to some of its more splashy catalog companions, it really isn’t. This little group of stars is part of the Cygnus OB1 association, which just happens to be heading towards us at a speed of 28 kilometers per second (17.4 miles per second). If it weren’t obscured by Milky Way dust, the light of its stars would be 1,000 times brighter! Messier 29 and Gamma Cygni (Sadr). Credit: Wikisky All in all, M29 has around 50 member stars, but this 10-million-year-old star cluster still has some surprises. The five brightest stars you see are all giant stars of spectral class B0, and if we were to put one next to our own Sol, it would shine 160,000 times brighter. Imagine just how “lit up” any planet residing inside that 11 light-year expanse might be! Astronomers were curious about Messier 29, too, so they went in search of binary stars. As C. Boeche (et al) wrote in a 2003 study: “Between 1996 and 2003 we obtained 226 high resolution spectra of 16 stars in the field of the young open cluster NGC 6913, to constrain its main properties and study its internal kinematics. Twelve of the program stars turned out to be members, one of them probably unbound. Nine are binaries (one eclipsing and another double lined) and for seven of them the observations allowed us to derive the orbital elements. All but two of the nine discovered binaries are cluster members. In spite of the young age (a few Myr), the cluster already shows signs that could be interpreted as evidence of dynamical relaxation and mass segregation. “However, they may be also the result of an unconventional formation scenario. The dynamical (virial) mass as estimated from the radial velocity dispersion is larger than the cluster luminous mass, which may be explained by a combination of the optically thick interstellar cloud that occults part of the cluster, the unbound state or undetected very wide binary orbit of some of the members that inflate the velocity dispersion and a high inclination for the axis of possible cluster angular momentum. All the discovered binaries are hard enough to survive average close encounters within the cluster and do not yet show signs of relaxation of the orbital elements to values typical of field binaries.” So why is finding binary stars important? Evolution is the solution, the hunt for Be stars. As S.L. Malchenko of the Crimean Astrophysical Observatory wrote in a 2008 study on Be stars: “The phenomenon of Be stars has been known for over a century.
The fact that at least 20% of B stars have an emission spectrum supports the definition that this phenomenon is not special but is rather typical of a large group of objects at a certain stage of evolution. The vagueness of the concept of the Be phenomenon suggests that this definition encompasses a broad group of objects near the main sequence that includes binary systems with different rates of mass exchange. This young open cluster in the Cyg OB1 association, also known as M29, contains a large number of luminous stars with spectral types around B0. An extreme variation of extinction is found across the young open cluster NGC 6913; extinction in the cluster center is relatively homogeneous, but very large. We observed 10 spectra for 7 B stars and one known Be star in the blue region.” Close-up of the core region of Messier 29. Credit: Adam Block/Mount Lemmon SkyCenter/University of Arizona Although you won’t be able to detect it visually, there is also some nebulosity associated with M29, which is another important clue to this star cluster’s evolution. As B. Bhavya of Cochin University of Science and Technology wrote in a 2008 study: “The Cygnus region is a region of recent star formation activity in the Milky Way and is rich in massive early type stars concentrated in OB associations. The presence of nebulosity and massive stars indicates that stars have been forming until very recently, and the young clusters found here are the result of the recent star formation event. Though the above fact is known, what is not known is when this star formation process started and how it proceeded in the region. Though one assumes that all the stars in a cluster have the same age, this assumption is not valid when the candidate cluster is very young. In the case of young clusters, there is a chance for a spread in the age of the stars, depending on the duration of star formation. An estimation of this formation time-scale in the clusters formed in a star forming complex will indicate the duration of star formation and its direction of propagation within the complex. In principle, duration of star formation is defined as the difference between the ages of the oldest and the youngest star formed in the cluster. In practice, the age of the oldest star is assumed as the age of that star which is about to turn off from the main-sequence (MS) (turn-off age) and the age of the youngest star is the age of the youngest pre-MS star (turn-on age). The turn-off ages of many clusters are known, but the turn-on age is not known for most of the clusters.” History of Observation: This cool little star cluster was an original discovery of Charles Messier, who first observed it in 1764. As he wrote of the object in his notes at the time: “In the night of July 29 to 30, 1764, I have discovered a cluster of six or seven very small stars which are below Gamma Cygni, and which one sees with an ordinary refractor of 3 feet and a half in the form of a nebula. I have compared this cluster with the star Gamma, and I have determined its position in right ascension as 303d 54′ 29″, and its declination of 37d 11′ 57″ north.” Gamma Cygni (the brightest object in the center) and neighboring regions. Credit: Wikipedia Commons/Erik Larsen In the case of this cluster, it was independently recovered again by Caroline Herschel, who wrote: “About 1 deg under Gamma Cygni; in my telescope 5 small stars thus. My Brother looked at them with the 7 ft and counted 12. It is not in Mess.
catalogue.” William would also return to the cluster with his own observations: “Is not sufficiently marked in the heavens to deserve notice, as 7 or 8 small stars together are so frequent about this part of the heavens that one might find them by hundreds.” So why the confusion? In this circumstance, perhaps Messier was a bit distracted, for it would appear that his logged coordinates were somewhat amiss. Leave it to Admiral Smyth to set the record straight: “A neat but small cluster of stars at the root of the Swan’s neck, and in the preceding branch of the Milky Way, not quite 2deg south of Gamma; and preceding 40 Cygni, a star of the 6th magnitude, by one degree just on the parallel. In the sp [south preceding, SW] portion are the two stars here estimated as double, of which A is 8, yellow; B 11, dusky. Messier discovered this in 1764; and though his description of it is very fair, his declination is very much out: worked up for my epoch it would be north 37d 26′ 15″. But one is only surprised that, with his confined methods and means, so much was accomplished.” Kudos to Mr. Messier for being able to distinguish a truly related group of stars in a field of so many! Take the time to enjoy this neat little grouping for yourself and remember – it’s heading our way. Locating Messier 29: Finding M29 in binoculars or a telescope is quite easy once you recognize the constellation of Cygnus. Its cross-shape is very distinctive, and the marker star you will need to locate this open star cluster is Gamma – bright and centermost. For most average binoculars, you will only need to aim at Gamma and you will see Messier 29 as a tiny grouping of stars that resembles a small box. The location of Messier 29, in the direction of the Cygnus constellation. Credit: IAU and Sky & Telescope magazine (Roger Sinnott & Rick Fienberg) For a telescope, begin with your finderscope on Gamma, and look for your next starhop marker star about a finger width southwest. Once this star is near the center of your finderscope field, M29 will also be in a low magnification eyepiece field of view. Because it is a very widely spaced galactic open star cluster that only consists of a few stars, it makes an outstanding object that stands up to any type of sky conditions. Except, of course, clouds! Messier 29 can easily be seen in light polluted areas and during a full Moon – making it a prize object for study for even the smallest of telescopes. As always, here are the quick facts to help you get started:
Object Name: Messier 29
Alternative Designations: M29, NGC 6913
Object Type: Open Galactic Star Cluster
Constellation: Cygnus
Right Ascension: 20 : 23.9 (h:m)
Declination: +38 : 32 (deg:m)
Distance: 4.0 (kly)
Visual Brightness: 7.1 (mag)
Apparent Dimension: 7.0 (arc min)
We have written many interesting articles about Messier Objects here at Universe Today. Here’s Tammy Plotner’s Introduction to the Messier Objects, M1 – The Crab Nebula, M8 – The Lagoon Nebula, and David Dickinson’s articles on the 2013 and 2014 Messier Marathons. Be sure to check out our complete Messier Catalog. And for more information, check out the SEDS Messier Database. Sources: Messier Objects – Messier 29 SEDS Messier Database – Messier 29 Wikipedia – Messier 29 The post Messier 29 – The NGC 6913 Open Star Cluster appeared first on Universe Today. 26 Dec
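The quick facts above are enough for a classic back-of-the-envelope exercise: converting apparent magnitude and distance into an absolute magnitude via the distance modulus. Note this sketch ignores interstellar extinction, which, as mentioned earlier, is substantial toward M29, so the cluster's stars are intrinsically brighter than this estimate suggests.

import math

LIGHT_YEARS_PER_PARSEC = 3.2616

def absolute_magnitude(apparent_mag, distance_ly):
    # Distance modulus: M = m - 5 * log10(d / 10 pc), extinction ignored.
    distance_pc = distance_ly / LIGHT_YEARS_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Quick facts above: visual brightness 7.1 mag at about 4,000 light years.
print(round(absolute_magnitude(7.1, 4000.0), 1))   # about -3.3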
Merry Christmas From Space 2016 - All six members of the Expedition 50 crew aboard the International Space Station celebrated the holidays together with a festive meal on Christmas Day, Dec. 25, 2016. Image Credit: NASA As we celebrate the Christmas tidings of 2016 here on Earth, a lucky multinational crew of astronauts and cosmonauts celebrates the festive season floating in zero-g while living and working together in space aboard the Earth-orbiting International Space Station (ISS) complex – peacefully cooperating to benefit all humanity. Today, Dec. 25, 2016, the six-person Expedition 50 crew of five men and one woman marked the joyous holiday of Christ’s birth by gathering for a festive meal in space – as billions of Earthlings celebrated this Christmas season of giving, remembrance and peace to all here on our home planet. This year is an especially noteworthy Space Christmas because it counts as Expedition 50 – the 50th crew to reside on board since the space station began continuous occupancy by rotating crews in November 2000. The Expedition 50 crew currently comprises people from the three nations supporting the ISS – namely the US, Russia and France: Commander Shane Kimbrough from NASA and flight engineers Andrey Borisenko (Roscosmos), Sergey Ryzhikov (Roscosmos), Thomas Pesquet (ESA), Peggy Whitson (NASA), and Oleg Novitskiy (Roscosmos). Here is a short video of holiday greetings from a trio of crew members explaining what Christmas in space means to them: Video Caption: Space Station Crew Celebrates the Holidays Aboard the Orbital Lab. Aboard the International Space Station, Expedition 50 Commander Shane Kimbrough and Peggy Whitson of NASA and Thomas Pesquet of the European Space Agency discussed their thoughts about being in space during the holidays and how they plan to celebrate Christmas and New Year’s in a downlink. Credit: NASA “Hello from the Expedition 50 Crew! We’d like to share what Christmas means to us,” said Expedition 50 Commander Shane Kimbrough. “For me it’s a lot about family. We always travel to meet up with our family, which is dispersed across the country. And we go home to Georgia and Florida quite a bit to meet up. Always a great time to get together and share with each other. Although it’s typically thought of as a season to get things, we in our family think about the giving aspect. Giving of our many talents and resources, especially to those less fortunate.” Kimbrough arrived on the complex in October, followed a month later by Whitson and Pesquet in November. They were all launched aboard Russian Soyuz capsules from the Baikonur Cosmodrome in Kazakhstan. Aboard the International Space Station, Expedition 50 Flight Engineer Peggy Whitson of NASA sent holiday greetings and festive imagery from the cupola on Dec. 18, 2016. Credit: NASA. And Peggy Whitson especially has a lot to celebrate in space! Not only is Whitson currently enjoying her third long-duration flight aboard the station as an Expedition 50 flight engineer, she will soon become the first woman to command the station twice! That momentous event happens when she assumes the role of Space Station Commander early in 2017, during the start of Expedition 51. Aboard the International Space Station, Expedition 50 Flight Engineer Peggy Whitson of NASA sent holiday greetings and festive imagery from the Japanese Kibo laboratory module on Dec. 18, 2016.
Credit: NASA Stay tuned here for Ken’s continuing Earth and Planetary science and human spaceflight news. Ken Kremer NASA astronaut Peggy Whitson floats through the Unity module aboard the International Space Station. On her third long-duration flight aboard the station, Whitson will become the first woman to command the station twice when she assumes the role during Expedition 51. Credit: NASA The post Merry Christmas From Space 2016 appeared first on Universe Today. 25 Dec
See a Christmas-Time Binocular Comet: 45P/Honda-Mrkos-Pajdusakova - Comet 45P/Honda-Mrkos-Pajdusakova captured in its glory on Dec. 22, 2016. It displays a bright, well-condensed blue-green coma and a long ion or gas tail pointing east. Comet observers take note: a Swan Band filter shows a larger coma and increases the comet’s contrast. Credit: Gerald Rhemann Merry Christmas and Happy Holidays all! I hope the day finds you in the company of family or friends and feeling at peace. While you’ve been shopping for gifts the past few weeks, a returning comet has been brightening up in the evening sky. Named 45P/Honda-Mrkos-Pajdusakova, it returns to the hood every 5.25 years after vacationing beyond the planet Jupiter. It’s tempting to blow by the name and see only a jumble of letters, but let’s try to pronounce it: HON-da — MUR-Koz — PIE-doo-sha-ko-vah. Not too hard, right? Tonight, the comet will appear about 12.5 degrees to the west of Venus in central Capricornus. You can spot it near the end of evening twilight. Use larger binoculars or a telescope. Credit: Stellarium Comet 45P is a short period comet — one with an orbital period of fewer than 200 years — discovered on December 3, 1948 by Minoru Honda along with co-discoverers Antonin Mrkos and Ludmila Pajdusakova. Three names are the maximum a comet can have, even if 15 people simultaneously discover it. 45P has a history of brightening rapidly as it approaches the sun, and this go-round is proof. A faint nothing a few weeks back, the comet’s now magnitude +7.5 and visible in 50mm or larger binoculars from low light pollution locations. You can catch it right around the end of dusk this week and next as it arcs across central Capricornus not far behind the brilliant planet Venus. 45P will look like a dim, fuzzy star in binoculars, but if you can get a telescope on it, you’ll see a fluffy, round coma, a bright, star-like center and perhaps even a faint spike of a tail sticking out to the east. Time exposure photos reveal a tail at least 3° long and a gorgeous, aqua-tinted coma. I saw the color straight off when observing the comet several nights ago in my 15-inch reflector at low power (64x). Use this map to help you follow the comet night to night. Tick marks start this evening (Dec. 25) and show its nightly position through Jan. 8 around 6 p.m. local time, or about an hour and 15 minutes after sunset. Venus, at upper left, is shown through the 28th. Click the chart for a larger version you can save and print out for use at your telescope. Created with Chris Marriott’s SkyMap software Right now, and for the remainder of its evening apparition, 45P will never appear very high in the southwestern sky. Look for it a little before the end of evening twilight, when the sky is reasonably dark and the comet is as high as it gets — about a fist above the horizon as seen from mid-northern latitudes. That’s pretty low, so make the best of your time. I recommend you begin around 1 hour and 15 minutes after sunset. The further south you live, the higher 45P will appear. To a point. It hovers low at nightfall this month and next. That will change in February, when the comet pulls away from the sun and makes a very close approach to the Earth while sailing across the morning sky. How about a helping hand? On New Year’s Eve, the 2-day-old crescent Moon will be just a few degrees from 45P. This simulation shows the view through 50mm or larger binoculars with an ~6 degree field of view for the Central time zone.
Map: Bob King, Source: Stellarium 45P reaches perihelion or closest distance to the sun on Dec. 31 and will remain visible through about Jan. 15 at dusk. An approximately 2-week hiatus follows, when it’s lost in the twilight glow. Then in early February, the comet reappears at dawn and races across Aquila and Hercules, zipping closest to Earth on Feb. 11 at a distance of only 7.7 million miles. During that time, we may even be able to see this little fuzzball with the naked eye; its predicted magnitude of +6 at maximum is right at the naked eye limit. Even in suburban skies, it will make an easy catch in binoculars then. I’ll update with new charts as we approach that time. For now, enjoy the prospect of ‘opening up’ this cometary gift as the last glow of dusk subsides into night. The post See a Christmas-Time Binocular Comet: 45P/Honda-Mrkos-Pajdusakova appeared first on Universe Today. 25 Dec
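To put that February 11 close approach in perspective, a quick unit conversion of the 7.7-million-mile figure:

MILES_TO_KM = 1.609344
AU_IN_KM = 149_597_870.7
LUNAR_DISTANCE_KM = 384_400.0

close_approach_km = 7.7e6 * MILES_TO_KM
print(f"{close_approach_km / AU_IN_KM:.3f} AU")                        # ~0.083 AU
print(f"{close_approach_km / LUNAR_DISTANCE_KM:.0f} lunar distances")  # ~32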
The Canis Minor Constellation - Welcome back to Constellation Friday! Today, in honor of the late and great Tammy Plotner, we will be dealing with the “little dog” – the Canis Minor constellation! In the 2nd century CE, Greek-Egyptian astronomer Claudius Ptolemaeus (aka. Ptolemy) compiled a list of all the then-known 48 constellations. This treatise, known as the Almagest, would be used by medieval European and Islamic scholars for over a thousand years to come, effectively becoming astrological and astronomical canon until the early Modern Age. One of these constellations was Canis Minor, a small constellation in the northern hemisphere. As a relatively dim collection of stars, it contains only two particularly bright stars and only faint Deep Sky Objects. Today, it is one of the 88 constellations recognized by the International Astronomical Union, and is bordered by the Monoceros, Gemini, Cancer and Hydra constellations. Name and Meaning: As with most asterisms named by the Greeks and Romans, the first recorded mention of this constellation goes back to ancient Mesopotamia. Specifically, Canis Minor’s brightest stars – Procyon and Gomeisa – were mentioned in the Three Stars Each tablets (ca. 1100 BCE), where they were referred to as MASH.TAB.BA (or “twins”). The Winter Hexagon, which contains parts of the Auriga, Canis Major, Canis Minor, Gemini, Monoceros, Orion, Taurus, Lepus and Eridanus constellations. Credit: constellation-guide.com In the later texts that belong to the MUL.APIN, the constellation was given the name DAR.LUGAL (“the star which stands behind it”) and represented a rooster. According to ancient Greco-Roman mythology, Canis Minor represented the smaller of Orion’s two hunting dogs, though they did not recognize it as its own constellation. In Greek mythology, Canis Minor is also connected with the Teumessian Fox, a beast turned into stone with its hunter (Laelaps) by Zeus. He then placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). According to English astronomer and biographer of constellation history Ian Ridpath: “Canis Minor is usually identified as one of the dogs of Orion. But in a famous legend from Attica (the area around Athens), recounted by the mythographer Hyginus, the constellation represents Maera, dog of Icarius, the man whom the god Dionysus first taught to make wine. When Icarius gave his wine to some shepherds for tasting, they rapidly became drunk. Suspecting that Icarius had poisoned them, they killed him. Maera the dog ran howling to Icarius’s daughter Erigone, caught hold of her dress with his teeth and led her to her father’s body. Both Erigone and the dog took their own lives where Icarius lay. “Zeus placed their images among the stars as a reminder of the unfortunate affair. To atone for their tragic mistake, the people of Athens instituted a yearly celebration in honour of Icarius and Erigone. In this story, Icarius is identified with the constellation Boötes, Erigone is Virgo and Maera is Canis Minor.” Canis Minor, as depicted by Johann Bode in his 1801 work Uranographia. Credit: Wikipedia Commons/Alessio Govi To the ancient Egyptians, this constellation represented Anubis, the jackal god. To the ancient Aztecs, the stars of Canis Minor were incorporated, along with stars from Orion and Gemini, into an asterism known as “Water”, which was associated with the day. Procyon was also significant in the cultural traditions of the Polynesians, the Maori people of New Zealand, and the Aborigines of Australia.
In Chinese astronomy, the stars corresponding to Canis Minor were part of the Vermilion Bird of the South. Along with stars from Cancer and Gemini, they formed the asterisms known as the Northern and Southern River, as well as the asterism Shuiwei (“water level”), which represented an official who managed floodwaters or a marker of the water level. History of Observation: Canis Minor was one of the original 48 constellations included by Ptolemy in the Almagest. Though not recognized as its own asterism by the ancient Greeks, it was added by the Romans as the smaller of Orion’s hunting dogs. Thanks to Ptolemy’s inclusion of it in his 2nd century treatise, it would go on to become part of astrological and astronomical traditions for a thousand years to come. For medieval Arabic astronomers, Canis Minor continued to be depicted as a dog, and was known as “al-Kalb al-Asghar”. It was included in the Book of Fixed Stars by Abd al-Rahman al-Sufi, who assigned a canine figure to his stellar diagram. Procyon and Gomeisa were also named for their proximity to Sirius; Procyon being named the “Syrian Sirius” (“ash-Shi’ra ash-Shamiya”) and Gomeisa the “Sirius with bleary eyes” (“ash-Shira al-Ghamisa”). The constellation Canis Minor, shown alongside Monoceros and the obsolete constellation Atelier Typographique. Credit: Library of Congress The constellation was included in Sidney Hall’s Urania’s Mirror (1825) alongside Monoceros and the now obsolete constellation Atelier Typographique. Many alternate names were suggested between the 17th and 19th centuries in an attempt to simplify celestial charts. However, Canis Minor has endured; and in 1922, it became one of the 88 modern constellations to be recognized by the IAU. Notable Features: Canis Minor contains two primary stars and 14 Bayer/Flamsteed designated stars. Its brightest star, Procyon (Alpha Canis Minoris), is also the seventh brightest star in the sky. With an apparent visual magnitude of 0.34, Procyon is not extraordinarily luminous in itself, but its proximity to the Sun – 11.41 light years from Earth – ensures that it appears bright in the night sky. The star’s name is derived from the Greek word which means “before the dog”, a reference to the fact that it appears to rise before Sirius (the “Dog Star”) when observed from northern latitudes. Procyon is a binary star system, composed of a white main sequence star (Procyon A) and a faint DA-type white dwarf companion (Procyon B). Procyon is part of the Winter Triangle asterism, along with Sirius in Canis Major and Betelgeuse in the constellation Orion. It is also part of the Winter Hexagon, along with the stars Capella in Auriga, Aldebaran in Taurus, Castor and Pollux in Gemini, Rigel in Orion and Sirius in Canis Major. The stars of the Winter Triangle and the Winter Hexagon. Credit: constellation-guide.com Next up is Gomeisa, the second brightest star in Canis Minor. This hot, B8-type main sequence star is classified as a Gamma Cassiopeiae variable, which means that it rotates rapidly and exhibits irregular variations in luminosity because of the outflow of matter. Gomeisa is approximately 170 light years from Earth, and the name is derived from the Arabic “al-ghumaisa” (“the bleary-eyed woman”). Canis Minor also has a number of Deep Sky Objects located within it, but all are very faint and difficult to observe. The brightest is the spiral galaxy NGC 2485 (apparent magnitude of 12.4), which is located 3.5 degrees northeast of Procyon.
There is one meteor shower associated with this constellation: the Canis-Minorids. Finding Canis Minor: Though it is relatively faint, Canis Minor and its stars can be viewed using binoculars. Start with the brightest, Procyon – aka. Alpha Canis Minoris (Alpha CMi). If you’re unsure which bright star it is, you’ll find it in the center of the diamond-shaped grouping in the southwest area. Known to the ancients as Procyon – “The Little Dog Star” – it’s the seventh brightest star in the night sky and the 13th nearest to our solar system. For over 100 years, astronomers have known this brilliant star has a companion. Being 15,000 times fainter than the parent star, Procyon B is an example of a white dwarf whose diameter is only about twice that of Earth. But its density exceeds two tons per cubic inch (or roughly an eighth of a metric ton per cubic centimeter)! While only very large telescopes can resolve this second closest of the white dwarf stars, even the moonlight can’t dim its beauty. The Winter Triangle. Credit: constellation-guide.com/Stellarium software Now hop over to Beta CMi. Known by the very strange name of Gomeisa (“bleary-eyed woman”), it refers to the weeping sister left behind when Sirius and Canopus ran to the south to save their lives. Located about 170 light years away from our Solar System, Beta is a blue-white class B main sequence dwarf star with around 3 times the mass of our Sun and a stellar luminosity over 250 times that of Sol. Gomeisa is a fast rotator, spinning at its equator with a speed of at least 250 kilometers per second (125 times our Sun’s rotation speed), giving the star a rotation period of about a day. Sunspots would appear to move very quickly there! According to Jim Kaler, Professor Emeritus of Astronomy at the University of Illinois: “Since we may be looking more at the star’s pole than at its equator, it may be spinning much faster, and indeed is rotating so quickly that it is surrounded by a disk of matter that emits radiation, rendering Gomeisa a “B-emission” star rather like Gamma Cassiopeiae and Alcyone. Like these two, Gomeisa is distinguished by having the size of its disk directly measured, the disk’s diameter almost four times larger than the star. Like quite a number of hot stars (including Adhara, Nunki, and many others), Gomeisa is also surrounded by a thin cloud of dusty interstellar gas that it helps to heat.” Now hop over to Gamma Canis Minoris, an orange K-type giant with an apparent magnitude of +4.33. It is a spectroscopic binary with an unresolved companion in a 389-day orbit, approximately 398 light years from Earth. And next is Epsilon Canis Minoris, a yellow G-type bright giant (apparent magnitude of +4.99) which is approximately 990 light years from Earth. The location of Canis Minor in the northern hemisphere. Credit: IAU/Sky&Telescope magazine For smaller telescopes, the double star Struve 1149 is a lovely sight, consisting of a yellow primary star and a faintly blue companion. For larger telescopes and GoTo telescopes, try NGC 2485 (RA 07 56.7 Dec +07 29), a magnitude 13 spiral galaxy that has a small, round glow, sharp edges and a very bright, stellar nucleus. If you want one that’s even more challenging, try NGC 2508 (RA 08 02 0 Dec +08 34). Canis Minor lies in the second quadrant of the northern hemisphere (NQ2) and can be seen at latitudes between +90° and -75°. The neighboring constellations are Cancer, Gemini, Hydra, and Monoceros, and it is best visible during the month of March.
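That density figure is easy to sanity-check. A short sketch using two assumed inputs: Procyon B's dynamical mass of roughly 0.6 solar masses (from the binary orbit) and the diameter of about twice Earth's quoted above:

import math

SOLAR_MASS_KG = 1.989e30
EARTH_RADIUS_M = 6.371e6

def mean_density(mass_kg, radius_m):
    # Mean density of a uniform sphere: rho = M / ((4/3) * pi * R^3).
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

# A diameter of twice Earth's means a radius of two Earth radii.
rho = mean_density(0.6 * SOLAR_MASS_KG, 2.0 * EARTH_RADIUS_M)
print(f"{rho:.2e} kg per cubic meter")                               # ~1.4e8
print(f"{rho * 1.6387e-5 / 1000.0:.1f} metric tons per cubic inch")  # ~2.3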
We have written many interesting articles about the constellation here at Universe Today. Here is What Are The Constellations?, What Is The Zodiac?, and Zodiac Signs And Their Dates. Be sure to check out The Messier Catalog while you’re at it! For more information, check out the IAU’s list of Constellations, and the Students for the Exploration and Development of Space page on Canis Minor and Constellation Families. Sources: Wikipedia – Canis Minor IAU – Canis Minor Constellation Guide – Canis Minor Ian Ridpath’s Star Tales – Canis Minor The post The Canis Minor Constellation appeared first on Universe Today. 23 Dec
Weekly Space Hangout – December 23, 2016: Mathew Anderson’s “Our Cosmic Story” - Host: Fraser Cain (@fcain) Special Guest: Mathew Anderson is the author of “Our Cosmic Story”, available on Amazon in January 2017. He wrote “Our Cosmic Story” out of an interest developed over his years studying science giants like Brian Greene, Neil deGrasse Tyson, and Richard Dawkins, as well as past figures like Carl Sagan. This book is a big-picture view of our world, its diverse life and civilizations, and the chance for life and civilizations elsewhere in the cosmos. As a special treat, for a limited time, our listeners will have the opportunity to receive an advance electronic copy of Mathew’s book. Join us today to learn how to get your copy! Guests: Morgan Rehnberg (MorganRehnberg.com / @MorganRehnberg) Their stories this week:
James Webb experiences a test anomaly
False alarm on brightest ever supernova
Where will NASA’s next midsize mission go?
We use a tool called Trello to submit and vote on stories we would like to see covered each week, and then Fraser will be selecting the stories from there. Here is the link to the Trello WSH page (http://bit.ly/WSHVote), which you can see without logging in. If you’d like to vote, just create a login and help us decide what to cover! If you would like to join the Weekly Space Hangout Crew, visit their site here and sign up. They’re a great team who can help you join our online discussions! If you would like to sign up for the AstronomyCast Solar Eclipse Escape, where you can meet Fraser and Pamela, plus WSH Crew and other fans, visit our site linked above and sign up! We record the Weekly Space Hangout every Friday at 12:00 pm Pacific / 3:00 pm Eastern. You can watch us live on Universe Today, or on the Universe Today YouTube page. The post Weekly Space Hangout – December 23, 2016: Mathew Anderson’s “Our Cosmic Story” appeared first on Universe Today. 23 Dec
How Do We Colonize Saturn’s Moons? - Welcome back to our series on Colonizing the Solar System! Today, we take a look at the largest of Saturn’s Moons – Titan, Rhea, Iapetus, Dione, Tethys, Enceladus, and Mimas. From the 17th century onward, astronomers made some profound discoveries around the planet Saturn, which at the time they believed to be the most distant planet of the Solar System. Christiaan Huygens and Giovanni Domenico Cassini were the first, spotting the largest moons of Saturn – Titan, Tethys, Dione, Rhea and Iapetus. More discoveries followed; and today, what we recognize as the Saturn system includes 62 confirmed satellites. What we know of this system has grown considerably in recent decades, thanks to missions like Voyager and Cassini. And with this knowledge has come multiple proposals that claim how Saturn’s moons should someday be colonized. In addition to hosting Titan – the only moon in the Solar System with a dense atmosphere – the system contains abundant resources that could be harnessed. Much like the idea of colonizing the Moon, Mars, the moons of Jupiter, and other bodies in the Solar System, the idea of establishing colonies on Saturn’s moons has been explored extensively in science fiction. At the same time, scientific proposals have been made that emphasize how colonies would benefit humanity, allowing us to mount missions deeper into space and ushering in an age of abundance!

A montage of images from Cassini of various moons and the rings around Saturn. Credit: NASA/JPL-Caltech/Space Science Institute

Examples in Fiction: The colonization of Saturn has been a recurring theme in science fiction over the decades. For example, in Arthur C. Clarke’s 1976 novel Imperial Earth, Titan is home to a human colony of 250,000 people. The colony plays a vital role in commerce, where hydrogen is taken from the atmosphere of Saturn and used as fuel for interplanetary travel. In Piers Anthony’s Bio of a Space Tyrant series (1983-2001), Saturn’s moons have been colonized by various nations in a post-diaspora era. In this story, Titan has been colonized by the Japanese, whereas Saturn has been colonized by the Russians, Chinese, and other former Asian nations. In the novel Titan (1997) by Stephen Baxter, the plot centers on a NASA mission to Titan which must struggle to survive after crash landing on the surface. In the first few chapters of Stanislaw Lem’s Fiasco (1986), a character ends up frozen on the surface of Titan, where they are stuck for several hundred years. In Kim Stanley Robinson’s Mars Trilogy (1992-1996), nitrogen from Titan is used in the terraforming of Mars. In his novel 2312 (2012), humanity has colonized several of Saturn’s moons, including Titan and Iapetus. Several references are made to the “Enceladian biota” in the story as well, which are microscopic alien organisms that some humans ingest because of their assumed medicinal value.

The moons of Saturn, from left to right: Mimas, Enceladus, Tethys, Dione, Rhea; Titan in the background; Iapetus (top) and irregularly shaped Hyperion (bottom). Credit: NASA/JPL/Space Science Institute

As part of his Grand Tour Series, Ben Bova’s novels Saturn (2003) and Titan (2006) address the colonization of the Cronian system. In these stories, Titan is being explored by an artificially intelligent rover which mysteriously begins malfunctioning, while a mobile human Space Colony explores the Rings and other moons.
Proposed Methods: In his book Entering Space: Creating a Spacefaring Civilization (1999), Robert Zubrin advocated colonizing the outer Solar System, a plan which included mining the atmospheres of the outer planets and establishing colonies on their moons. In addition to Uranus and Neptune, Saturn was designated as one of the largest sources of deuterium and helium-3, which could drive the pending fusion economy. He further identified Saturn as being the most important and most valuable of the three, because of its relative proximity, low radiation, and excellent system of moons. Zubrin claimed that Titan is a prime candidate for colonization because it is the only moon in the Solar System to have a dense atmosphere and is rich in carbon-bearing compounds. On March 9th, 2006, NASA’s Cassini space probe found possible evidence of liquid water on Enceladus, which was confirmed by NASA in 2014. According to data derived from the probe, this water emerges from jets around Enceladus’ southern pole, and is no more than tens of meters below the surface in certain locations. This would make collecting water considerably easier than on a moon like Europa, where the ice sheet is several km thick. Data obtained by Cassini also pointed towards the presence of volatile and organic molecules. And Enceladus also has a higher density than many of Saturn’s moons, which indicates that it has a larger average silicate core. All of these resources would prove very useful for the sake of constructing a colony and providing basic operations.

In October of 2012, Elon Musk unveiled his concept for a Mars Colonial Transporter (MCT), which was central to his long-term goal of colonizing Mars. At the time, Musk stated that the first unmanned flight of the Mars transport spacecraft would take place in 2022, followed by the first manned MCT mission departing in 2024. In September 2016, during the 2016 International Astronautical Congress, Musk revealed further details of his plan, which included the design for an Interplanetary Transport System (ITS) and estimated costs. This system, which was originally intended to transport settlers to Mars, had evolved in its role to transport human beings to more distant locations in the Solar System – which could include the Jovian and Cronian moons.

Artist’s rendering of possible hydrothermal activity that may be taking place on and under the seafloor of Enceladus. Credit: NASA/JPL

Potential Benefits: Compared to other locations in the Solar System – like the Jovian system – Saturn’s largest moons are exposed to considerably less radiation. For instance, Jupiter’s moons of Io, Ganymede and Europa are all subject to intense radiation from Jupiter’s magnetic field – ranging from about 3,600 rem per day near Io to 8 rem per day near Ganymede. This amount of exposure would be fatal (or at least very hazardous) to human beings, requiring that significant countermeasures be in place. In contrast, Saturn’s radiation belts are significantly weaker than Jupiter’s – with an equatorial field strength of 0.2 gauss (20 microtesla) compared to Jupiter’s 4.28 gauss (428 microtesla). This field extends from about 139,000 km from Saturn’s center out to a distance of about 362,000 km – compared to Jupiter’s, which extends to a distance of about 3 million km. Of Saturn’s largest moons, Mimas and Enceladus fall within this belt, while Dione, Rhea, Titan, and Iapetus all have orbits that place them from just outside of Saturn’s radiation belts to well beyond it.
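To make those dose rates concrete, here is a rough time-to-lethal-dose estimate in Python. The 500 rem acute threshold and the Europa figure are assumptions added for illustration; real radiological safety limits are far lower:

# Rough time to an acutely lethal dose for an unshielded human,
# using the surface dose rates cited above. The Europa rate and the
# 500 rem threshold are assumptions for illustration only.
dose_rates_rem_per_day = {
    "Io": 3600,
    "Europa": 540,   # assumed companion figure, not from the article
    "Ganymede": 8,
}
LETHAL_DOSE_REM = 500  # assumed acute threshold

for moon, rate in dose_rates_rem_per_day.items():
    print(f"{moon}: ~{LETHAL_DOSE_REM / rate * 24:.0f} hours to a lethal dose")
# Io: ~3 hours; Europa: ~22 hours; Ganymede: ~1500 hours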
Titan, for example, orbits Saturn at an average distance (semi-major axis) of 1,221,870 km, putting it safely beyond the reach of the gas giant’s energetic particles. And its thick atmosphere may be enough to shield residents from cosmic rays. In addition, frozen volatiles and methane harvested from Saturn’s moons could be used for the sake of terraforming other locations in the Solar System. In the case of Mars, nitrogen, ammonia and methane have been suggested as a means of thickening the atmosphere and triggering a greenhouse effect to warm the planet. This would cause water ice and frozen CO2 at the poles to sublimate – creating a self-sustaining process of ecological change. Colonies on Saturn’s moons could also serve as bases for harvesting deuterium and helium-3 from Saturn’s atmosphere. The abundant sources of water ice on these moons could also be used to make rocket fuel, thus serving as stopover and refueling points. In this way, colonizing the Saturn system could fuel Earth’s economy and facilitate exploration deeper into the outer Solar System.

Challenges: Naturally, there are numerous challenges to colonizing Saturn’s moons. These include the distance involved, the necessary resources and infrastructure, and the natural hazards colonies on these moons would have to deal with. For starters, while Saturn may be abundant in resources and closer to Earth than either Uranus or Neptune, it is still very far. At its closest (during opposition), Saturn is approximately 1.28 billion km away from Earth – about 8.5 AU, the equivalent of eight and a half times the average distance between the Earth and the Sun. To put that in perspective, it took the Voyager 1 probe roughly thirty-eight months to reach the Saturn system from Earth. For crewed spacecraft, carrying colonists and all the equipment needed to colonize the surface, it would take considerably longer to get there. These ships, in order to avoid being overly large and expensive, would need to rely on cryogenics or hibernation-related technology in order to save room on storage and accommodations. While this sort of technology is being investigated for crewed missions to Mars, it is still very much in the research and development phase.

Artist’s concept of a Bimodal Nuclear Thermal Rocket in Low Earth Orbit. Credit: NASA

Any vessels involved in the colonization efforts, or used to ship resources to and from the Cronian system, would also need to have advanced propulsion systems to ensure that they could make the trips in a realistic amount of time. Given the distances involved, this would likely require rockets that used nuclear-thermal propulsion, or something even more advanced (like anti-matter rockets). And while the former is technically feasible, no such propulsion systems have been built just yet. Anything more advanced would require many more years of research and development, and a major commitment in resources. All of this, in turn, raises the crucial issue of infrastructure. Basically, any fleet operating between Earth and Saturn would require a network of bases between here and there to keep them supplied and fueled. So really, any plans to colonize Saturn’s moons would have to wait upon the creation of permanent bases on the Moon, Mars, the Asteroid Belt, and most likely the Jovian moons. This process would be prohibitively expensive by current standards and (again) would require a fleet of ships with advanced drive systems.
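As a quick sanity check on the distance and transit figures above, the straight-line arithmetic works out as follows (real trajectories are longer than a straight line, so the implied speed is only a crude average):

AU_KM = 149_597_871              # one astronomical unit in km
distance_km = 8.5 * AU_KM        # Earth-Saturn distance near opposition

transit_s = 38 * 30.44 * 86_400  # thirty-eight months, in seconds
print(f"Distance: {distance_km / 1e9:.2f} billion km")
print(f"Implied average speed: {distance_km / transit_s:.1f} km/s")
# about 1.27 billion km and ~13 km/s, broadly in line with Voyager's cruise speed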
And while radiation is not a major threat in the Cronian system (unlike around Jupiter), the moons have been subject to a great deal of impacts over the course of their history. As a result, any settlements built on the surface would likely need additional protection in orbit, like a string of defensive satellites that could redirect comets and asteroids before they could strike.

The huge storm churning through the atmosphere in Saturn’s northern hemisphere overtakes itself as it encircles the planet in this true-color view from NASA’s Cassini spacecraft. Image credit: NASA/JPL-Caltech/SSI

Given its abundant resources, and the opportunities it would present for exploring deeper into the Solar System (and maybe even beyond), Saturn and its system of moons are nothing short of a major prize. On top of that, the prospect of settling there is a lot more appealing than other locations that come with greater hazards (i.e. Jupiter’s moons). However, such an effort would be daunting and would require a massive multi-generational commitment. And any such effort would most likely have to wait upon the construction of colonies and/or bases in locations closer to Earth first – such as on the Moon, Mars, the Asteroid Belt, and around Jupiter. But we can certainly hold out hope for the long run, can’t we? We have written many interesting articles on colonization here at Universe Today. Here’s Why Colonize the Moon First?, How Do We Colonize Mercury?, How Do We Colonize Venus?, Colonizing Venus with Floating Cities, Will We Ever Colonize Mars?, How Do We Colonize Jupiter’s Moons?, and The Definitive Guide to Terraforming. Astronomy Cast also has many interesting episodes on the subject. Check out Episode 59: Saturn, Episode 61: Saturn’s Moons, Episode 95: Humans to Mars, Part 2 – Colonists, Episode 115: The Moon, Part 3 – Return to the Moon, and Episode 381: Hollowing Asteroids in Science Fiction. Sources: NASA: Solar System Exploration – Saturn’s Moons; NASA – Cassini: Mission to Saturn – Moons; Wikipedia – Moons of Saturn; Wikipedia – Colonization of the Outer Solar System. The post How Do We Colonize Saturn’s Moons? appeared first on Universe Today. 22 Dec
Vaccines and Relatives – Don’t Engage - The holidays are upon us and for the next few weeks, everyone will be focused on shopping, wrapping, family and friends. We have just been through a brutal political election "season," filled with strong opinions and hot emotions. Heading off to visit relatives can lead to another equally hot and divisive topic: vaccination. If you have a baby or a toddler, the vaccine topic can lead to the best - and the worst - discussions. If you are with friends and family that share your views about the problems caused by vaccines, you can chat for hours. You can angrily share stories of your friend's vaccine-injured child or grandchild. You can spout off information about how vaccines do not protect you from getting sick - citing the mumps outbreaks, pertussis outbreaks and more. You can rattle off the harmful vaccine ingredients - aluminum, mercury, polysorbate 80, animal DNA, aborted fetal cells, and more. You can roll your eyes with like-minded adult friends about the horrors of so many vaccines - 46 doses of 16 different vaccines now part of the pediatric schedule. You can worry together about the looming adult vaccine mandates. You can share your opinions of VAXXED, the Movie and the heartbreaking video testimonials the VAXXED bus has collected across the country. You can talk about the Scream of the Week and the more than 7,000 articles you regularly read and share from VaccineResearchLibrary.com. You can talk confidently about how egregious the vaccine industry really is. But what if your family and friends are not on "your side"? What if they are totally pro-vaccine...and they interrogate you about your decisions to not vaccinate? What if they challenge the choices you have made to keep your child healthy - without vaccines? What should you do? First, ask yourself this question: Why do you feel compelled to engage? Is it really necessary to defend your decision? The choice is only between you and your spouse, and perhaps your doctor and your Heavenly Maker. Your decisions regarding your child's prevention program - taking vitamins, using homeopathy, avoiding sugar, getting adequate sleep, appropriate hand washing - are actually no one's business. Unless you really want a fight, don't engage. You can say, "S/he has all the vaccines s/he needs!" Make light of the question and move on. Change the subject. Your unvaccinated child is more than likely very bright, healthy and happy. Move the focus away from shots to the presents under the tree. Know that you've decided what is in the best interest of your child. If you firmly believe zero vaccines are necessary, you have told the truth! There's no reason to get your stomach twisted into a knot and worry if you have all the pertinent facts to answer every interrogation. If you've done your homework and you feel confident about your decision, that's all you need. Don't try to convince anyone about your choices. Be strong in your commitments. YOU are responsible for that little person and your decisions are key to their future. Don't allow yourself to be bullied. Even by your well-meaning sister or mother-in-law. 13 Dec
Scream #182: Releasing Live Vaccines To Stop Disease - October 26, 2016 - Eradicating infectious disease using weakly transmissible vaccines (full text) "…genetic engineering brings to life the possibility of a live, transmissible vaccine. Unfortunately, releasing a highly transmissible vaccine poses substantial evolutionary risks, including reversion to high virulence as has been documented for the oral polio vaccine….Rather than directly vaccinating every individual within a population, a transmissible vaccine would allow large swaths of the population to be vaccinated effortlessly by releasing an infectious agent genetically engineered to be benign yet infectious….Given the current pace of technological advance in genetic engineering, it is only a matter of time before transmissible vaccines can be easily developed for a wide range of infectious diseases.” COMMENT: This is stranger than science fiction and, because it is real, much more dangerous. So, based on mathematical models, they plan to create MORE live, transmissible vaccines? And of course, nothing could possibly go wrong with that global experiment! For example, this study admits that about one in every 750,000 children receiving the first dose of oral polio vaccine (OPV) experiences vaccine-associated paralytic poliomyelitis “attributable to reversion of one of the three strains back to an active, virulent form.” What if this were the case with experimental mutants? A study published in 1986 clearly explains the deadly potential of recombinants: In this study, 100 particles of benign herpes virus A were injected without consequence into Mouse Group A. Then, 100 particles of benign herpes virus B were injected without consequence into Mouse Group B. But when ONE particle of virus A plus ONE particle of virus B were injected into Mice C, 62% of the mice in Group C died. Eleven new recombinant viruses were isolated; three of the recombinant viruses were lethal to the next generation. The authors concluded: “Two avirulent viruses may interact in vivo to produce virulent recombinants that can be lethal.” The results of random genetic mutation are totally unpredictable. No contrived computer model could possibly predict how dangerous any mutation or recombinant can be. To genetically modify viruses and release them into the wild would be a massive global experiment. Pure insanity.

1 star scream — sigh/eyes rolling
2 star scream — aggravating
3 star scream — gut wrenching and sad
4 star scream — unbelievable — what's next?
5 star scream — I'm outraged! Take action!

The 5-star screams will be posted far and wide, sent out to radio, print, journalists and television outlets. You will get to help us determine what information needs to be broadcast to the world. 23 Nov
Scream #181: An inflammatory response is essential for the development of adaptive immunity-immunogenicity and immunotoxicity - November 11, 2016 - An inflammatory response is essential for the development of adaptive immunity-immunogenicity and immunotoxicity "Alum-adjuvanted vaccines induce local inflammatory nodules at injection sites, and the systemic and local production of the inflammatory cytokines, IL-1β, IL-6, and TNF-α, has been reported to occur three hours after vaccinations."..."Approximately 10–15% of recipients of simultaneous immunizations with multiple vaccines develop febrile reactions, and have higher G-CSF levels.” COMMENT: Aluminum is a known neurotoxin; there is no safe limit for injectable aluminum. More than 2,000 references on the adverse physiological effects of aluminum have been published in the National Library of Medicine. Aluminum compounds have been used as a vaccine adjuvant since 1926. Almost as soon as the metal was added (1927), toxicologist Dr. Victor Vaughn reported, “All salts of aluminum are poisonous when injected subcutaneously or intravenously.” Research clearly documents that aluminum adjuvants can induce serious immunological disorders including autoimmunity, brain inflammation, and widespread health consequences. Because aluminum has been found in plaques and neurofibrillary tangles within the brain, it is thought that aluminum plays a role in senile dementia and Alzheimer's disease. Other published adverse effects of excessive aluminum include bone disorders and anemia. NOTE: The FDA regulations for vaccines limit the amount of aluminum in the recommended individual dose of biological products, including vaccines, to not more than 0.85-1.25 mg http://www.fda.gov/BiologicsBloodVaccines/ScienceResearch/ucm284520.htm Vaccines that contain aluminum: DTaP, dT, HiB, Prevnar 7, Prevnar 13, Hepatitis B, Hepatitis A, Gardasil, and Anthrax. If a 1-year-old receives all recommended vaccines, he will receive between 1.6 and 4.1 mg of aluminum. If a 5-year-old receives all recommended vaccines for his age, he will receive an additional 1.9 to 4.9 mg of aluminum. Reference: Baylor, Norman W., Egan, William and Richman, Paul. “Aluminum salts in vaccines—US perspective.” Vaccine 20 (2002) S18–S23. This article documents that the aluminum within vaccines increases inflammatory cytokines within three hours of receiving a vaccine. We have been injecting this poisonous light metal into children for 90 years. How much more evidence do we need to Just Say No and end this systematic poisoning of the human race?

1 star scream — sigh/eyes rolling
2 star scream — aggravating
3 star scream — gut wrenching and sad
4 star scream — unbelievable — what's next?
5 star scream — I'm outraged! Take action!

The 5-star screams will be posted far and wide, sent out to radio, print, journalists and television outlets. You will get to help us determine what information needs to be broadcast to the world. 16 Nov
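The cumulative milligram figures quoted in the post above are simple addition over a first-year schedule. A back-of-the-envelope Python sketch; the per-dose aluminum amounts below vary widely by brand and are assumptions chosen only to land inside the FDA's quoted 0.85-1.25 mg per-dose ceiling:

# Illustrative tally of injected aluminum over a first-year schedule.
# Per-dose amounts are assumptions for illustration; actual content
# varies by manufacturer and product.
schedule_mg = {
    "Hepatitis B (3 doses)": 3 * 0.25,
    "DTaP (3 doses)":        3 * 0.33,
    "Hib (3 doses)":         3 * 0.225,
    "Prevnar 13 (3 doses)":  3 * 0.125,
}
total = sum(schedule_mg.values())
print(f"Cumulative aluminum by age 1: {total:.2f} mg")
# prints ~2.79 mg, inside the post's quoted 1.6-4.1 mg range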
Making More of What We Don’t Need - A recent article published in VACCINE described the global production of seasonal and pandemic influenza vaccines. Reading this article, I found these two items to be astonishing: Worldwide, there are 44 manufacturers of influenza vaccines. (Do we need this many companies producing flu shots?) The global production capacity for seasonal and pandemic vaccines in 2006 was estimated to be 500 million and 1.5 billion doses, respectively. Since 2006, the global capacity has increased to 1.5 billion doses of seasonal influenza and 6.2 billion doses of pandemic influenza vaccines. (Do we really need enough flu shots to vaccinate nearly every person in the world?) The article describes a ten-year strategy to “significantly strengthen” the production of flu shots and the development of a strategy to “address global scarcities and inequitable access to influenza vaccines.” The authors advocate for “enough vaccine to be available world wide to vaccinate 70% of the global population.” In the U.S., the media has consistently reported an estimated 36,000 deaths per year from influenza as though this were a solid fact. However, the number came from a 2003 study published in JAMA that used a statistical model to estimate the number of deaths over 9 flu seasons, from 1990 through 1999. The CDC actually admits it “doesn’t know” exactly how many people die from seasonal flu each year because: 1) states are not required to report the number of flu cases or flu deaths for those over 18 years of age; 2) most who die from the flu actually die from flu-related complications, such as pneumonia; and 3) most people who are said to die from seasonal flu-related complications are not tested specifically for influenza viruses. The mathematical model used to project the number of influenza deaths globally is equally inaccurate. For example, the WHO estimates that each year, upwards of 500,000 deaths occur from influenza infections. Since influenza is not a reportable infection even in first-world countries, how can the number of deaths from flu be accurately assessed in countries with marginal reporting systems? Testing the influenza vaccine vials for mercury, stray viruses and nanotechnology might be very revealing. In the meantime, it is more efficacious to keep your Vitamin D level at about 80 ng/dL and wash your hands to prevent the flu than to be injected with a solution that can cause long-lasting health harm. 14 Nov
Scream #180 – Microcephaly and Zika – not connected - November 1, 2016 - Tdap Vaccination During Pregnancy and Microcephaly and Other Structural Birth Defects in Offspring "Cases of microcephaly in Brazil increased substantially during 2015, likely associated with maternal Zika virus infections. However, these cases overlapped with the November 2014 initiation of Brazil's maternal Tdap program." Comment: Well, isn’t this interesting. This information completely supports the articles I wrote in February, 2016, about the scam of Zika, and about the hazards of vaccinating pregnant women here, where I said: Until now, Zika hasn’t had any interest for vaccine developers. The infection is mild, consisting of a transient fever and a rash. More than 80% of those affected have no symptoms at all. Now, with absolutely no proof that Zika causes birth defects or that it is associated with microcephaly, the Big Boys in the vaccine industry are chomping at the bit to get in the game. Why? As the Wall Street Journal reported, “Zika’s rapid spread could provide biopharma companies with a blockbuster opportunity.” It appears they overlooked the fact that among the 404 confirmed cases of microcephaly, only 17 have an association with the Zika virus. And here, where I said this: Despite evidence of human infectivity, there is no evidence proving the Zika virus causes birth defects. The Washington Post reported (Feb 6, 2016) that even though 3,177 pregnant women in Colombia have been diagnosed with the virus, there has been no increased incidence of microcephaly. However, the Harvard-trained Brazilian researcher Pliny Bezerra dos Santos Filho, Ph.D. has put together a different scenario: Are the shrunken brains in more than 4,000 children the result of vaccinating pregnant women? Jon Rappoport wrote an entire series on why Zika is not causally related to microcephaly here. It is perplexing why a side effect that appears within hours of a vaccine is dismissed as “temporal association does not prove causality,” but with Zika, a virus merely found in the presence of microcephaly is taken as absolute proof of causality. Such a double standard.

1 star scream — sigh/eyes rolling
2 star scream — aggravating
3 star scream — gut wrenching and sad
4 star scream — unbelievable — what's next?
5 star scream — I'm outraged! Take action!

The 5-star screams will be posted far and wide, sent out to radio, print, journalists and television outlets. You will get to help us determine what information needs to be broadcast to the world. 9 Nov
Do Your Homework: My Puppy’s Experience - On October 27, I drove to Bay City, Michigan to pick up my new 4 lb puppy, a Japanese chin named Teegan. Born on August 13, he was just under 12 weeks old. From the beginning, he has been a well-behaved, happy, playful pup, sleeping the night through from day one. Within one week, he knew his name and came scampering when called. He was learning 'sit', 'down' and 'stay' pretty consistently...and pretty consistently doing his business outside! Yesterday (11/4) was Teegan's first vet appointment. I took him in just to get a physical exam and a stool test. And, since I had never owned a dog, I had a lot of questions about nail trimming, ear cleaning, neutering, etc. He was diagnosed as having fleas and he was suspected of having puppy parasites. We talked at length about NO MORE vaccines - he had (unfortunately) been given three shots before I picked him up from the breeder. She agreed, but told me all the reasons why they should be given. Then we talked about heartworms and the parasites. It seemed reasonable to give Teegan heartworm medication. I asked if he was the "right age" - and was told yes. So, he was given a dose of a topical medication for the fleas, the parasites and heartworm protection. He seemed a little "weird" when we left, but I thought he was just worn out. He'd had a Big Day: The vet office was filled with unfamiliar people who wanted to coo over him. It was noisy with barking dogs and, as we were leaving, a large, squealing pet pig had about scared him to death! When we got home, he wouldn't eat and he slept in his crate from 5 pm until I woke him up to 'go potty' around 11 pm. When I picked him up, he acted like his skin hurt. He squirmed and didn't want to be touched - really odd. Then he scratched and scratched and scratched. He couldn’t walk straight...rather, he staggered into the kitchen, falling down along the way. He was batting at things in the air, almost like he was hallucinating. He wouldn't even eat his "good boy!" treat. Holding him, I could feel he was shaking, choking and almost seeming to have myoclonic jerks of his muscles. I thought that perhaps he was having a “die-off” reaction and would be better in the morning. But he wasn’t. After sleeping all night, this morning the symptoms were the same...even somewhat worse. I looked up the medication he was given, called REVOLUTION, which is the drug selamectin. Then I looked up the side effects. He had almost every one of them.... I was getting more and more horrified. Then I found site after site - like this one - that had many dozens of reports of short and long-term side effects. I had been told the medication was "safe, no side effects and absolutely necessary." Now I was somewhere between sick to my stomach and angry, with a big dose of guilt mixed in. I called the vet office. The office manager said the only side effects were skin irritation, so if I wanted to bring him back in, they would try to determine what was wrong with him. I'm a little embarrassed to admit she is now missing a sizable chunk of her behind. I thought, "I have to do something to help him...if this were a human baby, what would I do? LIVER support...to detox the medication." So, I did a little research to see if pet products contained milk thistle and NAC, to check if they would be safe for dogs. And yes, there were quite a few. So, I opened a capsule of Milk Thistle and NAC (500 mg each) and mixed in about 1/2 cup water. Guessing on the dose, I gave him 2 cc with a syringe.
He didn’t like it very much and I thought he was going to throw up. But after a few minutes, he settled down and fell asleep. I put him back in his crate...and about an hour later, he started to stir. Within about two hours, he was back to his playful, rambunctious self. THANK GOD. Literally. When the vet called me back several hours later, I was told, “I’ve never seen this reaction before....it just doesn’t ever happen. It's really rare, and of course, it would happen to your dog.” She said she would call the manufacturer to report the "rare side effect." I asked if there was anything she could do for him, and she said, "Just give him a bath and wash off the rest of the medication." I asked if she had read all the reports of serious reactions. She said, “You can find that on any medication. Can I show you sites about how many dogs die of flea anemia and heartworms?” Wow. Just wow. Now I know how parents feel when they call their pediatrician to report a reaction and get blown off. I know how helpless and angry - and guilty - they feel. But the key is: Don't put up with the excuses or lame explanations. And take action IMMEDIATELY to eliminate the toxicities. Milk thistle and NAC (N-Acetyl Cysteine) are very important for liver support, and they work better together, synergistically, than they do alone. I'm sharing this to reinforce to everyone even more: DO YOUR HOMEWORK. Research every vaccine and every medication before going to the pediatrician's office - or before taking our beloved pet to the vet. It never occurred to me that heartworm medication would cause my little guy to become so sick so quickly. I'm sure many parents think the same about vaccines. I'll know better the next time...and I'll follow my own advice. 7 Nov
Scream #179: Facebook vaccine post by Mark Zuckerberg - September 24, 2016 - A comparison of language use in pro- and anti-vaccination comments in response to a high profile Facebook post. (full text) In January 2016, Mark Zuckerberg (co-founder of Facebook) posted a photo of himself holding his baby daughter, captioned “Doctor’s visit – time for vaccines!” As of May 2016, the post had received approximately 3.4 million ‘likes,’ and 84,000 comments. Commenters addressed the risks of vaccination and vaccine refusal, resulting in a discussion between individuals unlikely to engage with one another under different circumstances and providing a unique opportunity to compare the emotional and cognitive components of broadly pro- and anti-vaccination comments using linguistic analysis. Conclusion: Although the anti-vaccination stance is not scientifically-based, comments showed evidence of greater analytical thinking, and more references to health and the body. In contrast, pro-vaccination comments demonstrated greater comparative anxiety, with a particular focus on family and social processes. These results may be indicative of the relative salience of these issues and emotions in differing understandings of the benefits and risks of vaccination. Text-based analysis is a potentially useful and ecologically valid tool for assessing perceptions of health issues, and may provide unique information about particular concerns or arguments expressed on social media that could inform future interventions. Comment: Facebook is a communication tool. It’s a global place where we share informative and educational articles, personal information, and pictures. We even build businesses around Facebook as a marketing tool. This study shows that Facebook is also a research tool for the vaccine industry. There have been many publications in the mainstream journals over the last few years about “vaccine hesitancy.” They just can’t understand why anyone would want to refuse a vaccine! They analyze it, try to explain it and even offer advice to pediatricians on how best to coerce parents into vaccinating their children. The problem here isn’t so much with the authors’ analysis of the pro/con Facebook comments. It’s that the intelligent comments of those who refuse will be twisted to their liking. And it never ceases to amaze me, with the thousands of articles and the tens of thousands of testimonials about the harm caused by vaccination, that it is all dismissed by saying, “the anti-vaccination stance is not scientifically based.” It’s time to turn the tables on THEM: Where is YOUR science that proves the Vaccine Schedule is safe and scientifically based? (Answer: there is none.)

1 star scream — sigh/eyes rolling
2 star scream — aggravating
3 star scream — gut wrenching and sad
4 star scream — unbelievable — what's next?
5 star scream — I'm outraged! Take action!

The 5-star screams will be posted far and wide, sent out to radio, print, journalists and television outlets. You will get to help us determine what information needs to be broadcast to the world. 2 Nov
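The method the study above describes is essentially category-based word counting (in the style of LIWC-type linguistic analysis). A minimal Python sketch of that kind of comparison; the word lists here are toy stand-ins, not the study's actual lexicon:

# Toy version of category-frequency counting for comment text.
# The category word lists are invented for illustration.
ANXIETY_WORDS = {"worried", "afraid", "scared", "risk"}
ANALYTIC_WORDS = {"because", "therefore", "evidence", "study"}

def category_rates(comment: str) -> dict:
    tokens = [t.strip(".,!?") for t in comment.lower().split()]
    n = max(len(tokens), 1)
    return {
        "anxiety": sum(t in ANXIETY_WORDS for t in tokens) / n,
        "analytic": sum(t in ANALYTIC_WORDS for t in tokens) / n,
    }

print(category_rates("I am worried about the risk to my family"))
print(category_rates("The study presents evidence, therefore I agree"))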
Our Story - Research Associate, Vaccine Research Library - My son is 30 years old now, so I have been at this a “very” long time. He had an encephalitic reaction to his 4-month DTP vaccine (whole cell). He screamed continuously for 6 hours until his eyes rolled back in his head and he passed out. The pediatrician told me this was normal. A friend of mine had a baby 2 weeks younger than my son; her baby received the vaccine and died the same night. Here is her story: Death by Lethal Injection. He was hospitalized after the MMR vaccine with severe gastroenteritis at 19 months old, and had a colonoscopy without any anesthesia. His bowels were so inflamed that there was only one clear spot, the size of a quarter. The gastroenterologist asked me if he had recently had a virus. I said no - I didn’t think of the MMR he’d had just a few weeks prior! He was diagnosed with failure to thrive and wore the same size clothes for about three years. He had scoliosis, strabismus, migraines, and severe allergies. We took him to see Dr. Doris Rapp (The Impossible Child). Dr. Rapp taped his reaction to yeast in her office, and again after treatment. This treatment helped him immensely; it was reminiscent of the first DAN doctors. He still has migraines, but he is out there in the workforce and has a beautiful daughter. He has missed only a handful of days from work after being there for 6 years working 6 days a week. There is a genetic susceptibility: I almost died as an infant at 9 days old; I was baptized and given my last rites in the hospital on the same day. All the doctors could come up with was a small throat opening and a milk allergy. In the early ’60s I received the killed version of the measles vaccine, and I thought I would die. I acquired atypical measles from the shot. I also reacted severely to the ’76 swine flu vaccine, 15 minutes after receiving it. I learned later that there were other family members who also had reactions. At that time there were only a few vaccines on the schedule, and they were given fractional doses of the DTP vaccine. There are way too many vaccines on the schedule for this to be done today. My other son, now 17 years old, had a reaction to the DTaP vaccine at 6 months; we couldn’t wake him due to a hyporesponsive adverse event. His last vaccine was at 8 months: the Prevnar vaccine, which had been approved just the month before. I walked into the office very happy that he wasn’t receiving a vaccine that day. The pediatrician talked me into it. Here is the report that VAERS removed. Thanks to the NVIC for collecting many removed records. I of course left that pediatrician, and found another doctor, who stated that the Prevnar reaction was a serious one. Thanks to all the persistent warrior moms and dads; to Sandy Gottstein of Vaccination News, my lifeline; to Dr. Sherri Tenpenny for all her guidance; and especially to my dear friend Rita Hoffman of Vaccine Choice Canada for all of the help she has given me through the years. 29 Oct
Washington Scientists Forge Ahead Amid Uncertainty - The world is turned on its head, but there are still salmon to monitor as they navigate dams, crops to improve before climate change sets in, and energy grids to protect from cyber attackers. On Thursday evening, as protestors marched through downtown and helicopters hovered in the sky, scientists from Washington’s premier public research institutions showed off their latest work to the Seattle innovation community. In the still-shocking aftermath of the election, the SciTech Northwest event—hosted by the Technology Alliance and participating institutions—provided a welcome distraction in the form of substance, human ingenuity, and forward-looking progress. Here were a handful of outputs from more than $2 billion in annual public research investment—the majority of it from federal sources—at University of Washington, Washington State University, and Pacific Northwest National Laboratory. These institutions, which recently agreed to work together more closely on research and education, are major underpinnings of the Pacific Northwest’s thriving innovation economy. But hanging over the elegant room on the top floor of the Edgewater Hotel was the same fog of uncertainty settling into nearly every aspect of life in America. What will become of the nation’s scientific enterprise in the time of President Donald J. Trump? For the most part, the scientists were more interested in talking about their work than this uncertain future. The conventional wisdom is that Trump and a Republican-controlled Congress will seek to shrink the federal government, possibly including federal research investment. But who really knows? In the online Presidential Science Debate 2016—one of the scant venues to address these issues—Trump expressed a measure of support: “Though there are increasing demands to curtail spending and to balance the federal budget, we must make the commitment to invest in science, engineering, healthcare and other areas that will make the lives of Americans better, safer and more prosperous.” Here’s a closer look at three projects happening in Washington state now:

Smart Sensing in Phenomics. Farmers in Washington and around the world select new varieties to plant based on a huge range of factors, from the juiciness of an apple to the drought tolerance of wheat. The process of evaluating new varieties—phenotyping—has long relied on the laborious and subjective visual observations of breeders, who may test thousands of new varieties and select just one or two for a certain desired trait.

Sankaran, left.

That’s starting to change. WSU assistant professor Sindhuja Sankaran is part of a team of biological systems engineers bringing modern sensing technologies and data analysis to the field of phenomics. They are experimenting with hyperspectral sensors and thermal cameras that can see what the human eye cannot. The temperature of an experimental plant’s leaves, for example, can reveal transpiration rates, a clue to how efficiently it uses water. By mounting these sensors on drones, conventional small aircraft, and field platforms, researchers can gather data much more efficiently than a human walking through a greenhouse or a field. The challenge is finding the right combination of sensors and platforms to gather the data for each crop, and then analyzing it in a way that provides useful information to breeders, Sankaran says. It’s a new field, she says.
Researchers in Europe and Australia have been at it for five or six years, but skepticism remains. Sankaran and her team recently received federal funding from the National Institute of Food and Agriculture to study the benefit of sensor use in breeding apples, camelina, quinoa, dried peas, lentils, and winter and spring wheat.

The Ants Go Marching. Glenn Fink, a senior cyber security researcher at Pacific Northwest National Laboratory, went looking for inspiration for a decentralized, adaptable way to spot constantly evolving attacks on the broad IT infrastructure running systems like the power grid. He found some in Proverbs 6:6: “Go to the ant, thou sluggard; learn her ways, and be wise,” he recites. “I just took it literally.” Ants are collectively quite smart and successful, with a combined biomass rivaling that of humanity. “Ants solve some very difficult, not just hard problems, but adversarial problems,” Fink says. Their response to attack, methods of communication, and distributed decision making provided useful models for devising a new approach to IT security that is fast, decentralized, resilient, and adaptable to changing methods of attack.

Digital Ants.

The system, Ant-Based Cyber Defense, essentially unleashes a colony of digital ants on an IT system. These tiny, automated, limited-function programs crawl from machine to machine, diagnosing things like CPU, memory, and network usage. When a CPU Digital Ant detects elevated activity at a machine it visits, it reports that to another lightweight program installed on each machine, called a sentinel. The sentinel compares the elevated CPU report to other information it has received from other visiting ants. The sentinel may have an explanation. “Oh yeah, I’m a compute server, so I’m running high-end computational software,” Fink says. The ant program, in that case, moves on to inspect the next machine on the network. But if the sentinel has no explanation for the anomalous usage the ant detects, it “feeds” the ant—rewarding it for finding something. The ant moves on, and now at each subsequent machine it visits, it leaves a message for other ants crawling the system—akin to the pheromones actual ants use to communicate with each other—to the effect of “Back that way, I got fed,” Fink explains. “Other ants come across that digital pheromone field and say, ‘Oh, OK, I might get fed that way.’” So the Zombie Processes Ant, the Page Faults Ant, the Unauthorized Access Ant, and all the others follow the pheromone toward the anomalous machine to perform their own inspection. The anomalous machines—perhaps only a handful in a network of thousands or millions—are highlighted for human security analysts to investigate further. “The idea here is we want to find artifacts of unusual behavior that may be malicious, that people can look into later,” Fink says, calling it a “general-purpose signature discovery framework.” (A toy code sketch of this ant-and-sentinel scheme appears at the end of this article.) The underlying idea is that even as attack vectors evolve, they will always reveal themselves to at least some extent through anomalous utilization of the computing resources that are under attack. By automating the discovery of those anomalies—with the help of the Digital Ants—they can be contained more quickly.

The Interdisciplinary Triumph of Sensor Fish. Tracking endangered fish as they traverse hydroelectric dams used to involve performing minor surgery to implant short-lived, battery-powered devices.
Zhiqun Deng, chief scientist in PNNL’s Energy and Environment Directorate, demonstrated several new sensor and power technologies that promise less-invasive, longer-lasting fish-tracking devices, which are providing dam operators and fisheries managers with better information to improve survival rates. The latest iteration of the fish tracker is smaller than a fingernail, with a lithium-ion battery that can last 100 days while sending a location signal every three seconds, Deng explains. It can be quickly injected into fish, reducing mortality associated with earlier tracking technologies. Next spring, researchers will use a new sensor with a strip of flexible electronic material that can capture kinetic energy created by the fish swimming, powering the device for years.

A mechanical fish tail.

“We are lucky because of the major advances in micro-electronics,” he says. “Even if we had the idea, we couldn’t do it a few years ago.” The improved location data these sensors capture can be compared to other data gathered by another novel device. Sensor Fish, a smolt-sized tube packed with sensors, boldly traverses dam spillways, powerhouses, and fish ladders to determine what exactly is killing the real fish. As it travels from a dam’s forebay, through the turbines, and out the tailrace, Sensor Fish measures pressure, acceleration, and rotational velocity. “Once you understand that, then you can design your turbines accordingly,” he says, or adjust dam operations during times of fish migration to improve survival rates. “Some operations are more friendly than others.” The data are particularly important now as old dams are repowered. “Now is a good opportunity to replace them with a better design,” Deng says. The combination of disciplines brought to bear on this problem is impressive: low-power sensors, energy storage, micro-electronics, fisheries biology, dam operations, and the IT systems and data analytics work that bring it all together. “It’s a really big team effort from so many different disciplines that actually makes this possible,” he says. For a deeper look at interdisciplinary innovation, attend Xconomy Intersect on Dec. 8 in Seattle, where entrepreneurs, researchers, and investors will explore the crossroads from which the next big idea may emerge. 10 Nov
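As promised above, here is a toy Python sketch of the ant-and-sentinel scheme Fink describes. Every name, value, and threshold in it is invented for illustration; PNNL's actual Ant-Based Cyber Defense implementation is not public here:

import random

class Sentinel:
    # Per-machine program that judges whether a reading is explained.
    def __init__(self, expected_cpu: float):
        self.expected_cpu = expected_cpu

    def explains(self, cpu_reading: float) -> bool:
        return cpu_reading <= self.expected_cpu

machines = {f"host{i}": Sentinel(expected_cpu=0.5) for i in range(8)}
machines["host3"].expected_cpu = 0.2          # this box should be mostly idle
pheromone = {name: 0.0 for name in machines}  # trail other ants follow

def cpu_ant(readings):
    # One pass of a CPU ant: visit machines, strongest pheromone first.
    for name in sorted(machines, key=lambda m: -pheromone[m]):
        if not machines[name].explains(readings[name]):
            pheromone[name] += 1.0  # the ant gets "fed" and marks the trail
            print(f"{name}: unexplained CPU {readings[name]:.2f}")

readings = {name: random.uniform(0.0, 0.4) for name in machines}
readings["host3"] = 0.9  # the anomalous machine
for _ in range(3):
    cpu_ant(readings)
print("Flagged for human analysts:", [m for m, p in pheromone.items() if p > 1])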
Seattle Week in Review: Bottom of the Ninth, World’s Climate at Stake - For a moment this week—or for several hours, depending on how much of it you took in—America was transfixed by something historic, suspenseful, and with no modern precedent. Game 7 of the World Series, which delivered the pinnacle of drama in the sports world, was a needed break from the election, and perhaps a chance to temper our nerves for election night, when the stakes are as big as they get—unless you’re a Cubs fan. Let’s hope it doesn’t go to extra innings. The anxiety is real, so try to take it easy this weekend. Meanwhile, here are a few things that caught our attention in the last few days, including Amazon’s renewable energy long game, the international climate agreement, and Washington’s carbon tax vote; a chatbot for political questions; and a pledge among Washington’s research powerhouses to collaborate more closely. Details: —Despite its near absence from the scorched-earth presidential campaign, energy policy is likely to be fundamentally altered by the election. Implementation of the international climate agreement that takes effect today will be up to the signatory nations themselves. They will begin to sketch out ways to do so—and to monitor each other—at the next big U.N. climate conference, beginning Monday in Marrakech. Whether the U.S. continues to participate hinges on the election. The Washington Post’s breakdown of the implications is highly detailed. But the gist is this: Hillary Clinton represents a continuation or even a strengthening of the climate change policies of the Obama Administration. Donald Trump has pledged to effectively stick America’s fingers in its ears, close its eyes, and kick randomly at the efforts of the rest of the world like a toddler having a tantrum, ignorant that the house is on fire. One of the only official comments from China’s government on the U.S. election came from its climate chief. Reuters reports that Xie Zhenhua said of Trump’s dismissal of the global climate pact: “I believe a wise political leader should take policy stances that conform with global trends.” Meanwhile, one of the biggest areas of international and cross-industry agreement as the climate accord comes into effect is the need for a price on carbon dioxide emissions, reports Leigh Collins, editor of Recharge News (my former employer), from Paris. Xie notes in the Reuters dispatch that China’s carbon trading market is set to begin next year. In Washington state, voters will decide on what would be the nation’s first carbon tax plan—voting on I-732, which would tax greenhouse gas emissions, but reduce other taxes to be “revenue neutral.” The initiative has divided the environmental community over issues including social justice and the state’s regressive tax structure. David Roberts got all the details in this piece a couple of weeks ago in Vox. Money was streaming in on both sides of the issue in the week before the election and polls showed a close race, reported Hal Bernton in The Seattle Times. (We will be talking over the policy implications of the election with experienced Northwest cleantech investor Kirk Washington and University of Washington history professor and author Margaret O’Mara at Xconomy Intersect, our next Seattle tech event, coming up on Dec. 8. They are part of a great lineup of experts on topics from the forests and vineyards of the Northwest to cloud computing, healthcare IT, and machine learning. More information and a full list of speakers here.) 
—Corporations, and tech companies in particular, continue to drive demand for renewable energy. This week, Amazon Web Services announced an Ohio wind farm with a generating capacity of 189 megawatts. When it comes online a little more than a year from now, it will feed clean energy into the power grid that serves two major AWS data centers in Ohio and Virginia. AWS is aiming to procure all of the electricity used to power its data centers from renewable sources. The company says it’s on track to hit 50 percent renewables by the end of 2017. It currently has some 180 megawatts of wind and solar in production in the U.S. and another 569 megawatts of wind energy in planning or under construction. In a statement, AWS vice president of infrastructure Peter DeSantis says of the 100 percent renewable energy goal: “There are lots of things that go into making this a reality, including governments implementing policies that stimulate cost-effective renewable energy production, businesses that buy that energy, economical renewable projects from our development partners and utilities, as well as technological and operational innovation that drives greater efficiencies in our global infrastructure. We continue to push on all of these fronts to stay well ahead of our renewable energy goals.” —If you need more last-minute election information, there’s an Alexa skill for that. Rhiza, with offices in Pittsburgh and Seattle, helps media companies and marketers make better use of data. It baked some of its technology into My Pundit, a chatbot that answers election questions via devices powered by Amazon Alexa. My Pundit draws data from sources including RealClearPolitics and The Washington Post. —The three biggest research powerhouses in Washington—its public universities and Pacific Northwest National Laboratory (PNNL)—pledged to work together more closely in areas including clean energy, computing, and materials science. The memorandum of understanding signed by the top leaders of University of Washington, Washington State University, and PNNL builds on a range of ongoing research collaborations, including smart energy management in buildings and across campuses, and the Northwest Institute for Advanced Computing. They pledged to create more joint faculty appointments at PNNL and the universities, and to create new opportunities for students at PNNL facilities around the state. —Good reads at the end of truth, and the beginning of the robots: Farhad Manjoo, “How the Internet Is Loosening Our Grip on the Truth,” and Timothy Egan, “The Post-Truth Presidency,” both in The New York Times. Ellen Gamerman, “Love in the Time of Robots,” in The Wall Street Journal. “Policing Police Robots,” an in-depth review of the legal and policy issues “of a future where police robots are sophisticated, cheap, and widespread,” by UC Davis Professor of Law Elizabeth Joh, in the UCLA Law Review. 4 Nov
Houston SheHacks Event Encourages Women to Form, Lead Tech Startups - Houston—Rebellion Photonics co-founder and CEO Allison Lami Sawyer said she’s tired of being alone when attending technology conferences full of men. “Where is the pipeline of women behind me?” she said. “I want to see more women CEOs.” Sawyer’s remarks on Sunday were given to an assembled crowd attending SheHacks, a hackathon geared towards women in Houston. Her startup, Rebellion Photonics, served as the host for the coding weekend, which brought together founders, programmers, and other women interested in tech startups. In total, eight projects were approved by the organizers: startup ideas including websites to rent or buy formal dresses or kids’ equipment, an online game to encourage girls’ interest in chemistry, and a website that can connect busy adults to events happening in their cities. By Sunday evening, the teams made their pitches before a panel of judges that included Carolyn Rodz, founder of the Circular Board, and Chevron Technology Ventures president Barbara Burger. The top two winners of the competition were EllieGrid and Poshare, which both had some traction and customer acquisition. EllieGrid has a “smart” pillbox that allows users to scan medication labels so that an app can give them alerts about missed doses or medicines that need refilling. Poshare is a website where women can rent or buy dresses for formal occasions. The site is specifically targeting the bridesmaid market. In addition to those two startups, which will receive prizes in the form of mentoring, co-working spaces, and other services, the judges also said they wanted to recognize a few of the projects that were started from scratch at the beginning of the weekend: ShAIR is a website where parents can lend or borrow kids’ equipment and gear (think renting a playpen when you arrive at your in-laws’ city instead of dragging it through the airport). The second project is called PlanIt, a website that serves as a one-stop-shop dashboard for group travel that can share air, hotel, and other travel information. The hackathon was the first SheHacks event held outside New York, which has hosted two hackathons this year. (SheHacks itself is the brainchild of two New York women who worked in finance and couldn’t connect with a woman developer to help them with an app project.) According to the group WomenWhoTech, women-led ventures receive only 9 percent of seed stage investment and 13 percent of early stage venture capital. SheHacks says that so far it has hosted more than 100 women who pitched 23 new business ventures. For Sawyer at Rebellion, the idea is that if entrepreneurship looked like SheHacks, the tech industry wouldn’t have a diversity problem. And, in Houston, last weekend, I did notice the, well, diversity in the group: East Asians, South Asians, African-Americans, and a few women from Iran. As their teams formed and worked together on their projects over the weekend, mentors from Houston’s tech community helped them refine concepts, better target markets, and polish their pitches. Sawyer urged the attendees not to get discouraged from pursuing their ideas even as everyday life’s stresses and responsibilities begin to demand attention. “Rebellion didn’t happen overnight,” she said of her startup that sells a hyperspectral camera to detect potentially dangerous gas leaks in real time. “It was a series of small bursts of courage. This is one of those.” 25 Oct
Water Metering Startup Wins Vote for “Best Demo” at EvoNexus Event - A San Diego civil engineer who has developed an alternative technology for automating the measurement of water flow gave the best presentation at EvoNexus Fall Demo Day, an event that drew nearly 400 entrepreneurs, investors, and supporters late Thursday to Qualcomm’s corporate headquarters. Water Pigeon co-founder Sarp Sekeroglu got the most votes in a survey conducted immediately after six startups incubating at EvoNexus gave short presentations. EvoNexus operates as a free incubator (with “no strings attached”) for tech startups at three facilities in San Diego and Orange County. It is backed by the private real estate developer Irvine Company, with industry support from strategic partners like Qualcomm (NASDAQ: QCOM), ViaSat (NASDAQ: VSAT), and Cisco Systems (NASDAQ: CSCO), as well as other technology companies, banks, and service providers.

Water Pigeon came up with a way to enable automatic readings of water meters that doesn’t require a water utility to replace all of its existing meters or to build a private wireless network to collect data. Instead, Water Pigeon embeds its technology in the lid of the water meter box: rather than swapping each legacy meter for a smart meter, Sekeroglu says, a utility would simply replace the lid. A camera in each lid regularly takes an image of the water meter and uses optical character recognition technology to convert the image to data, which a wireless LTE modem transmits to the local water utility (a rough sketch of such a pipeline follows this article). The company is beginning five pilot projects to test its technology, Sekeroglu said. He estimated that implementing Water Pigeon’s technology for a water utility in the San Diego area—by replacing 7,300 water meter lids—would take about six months and cost about $2 million. Replacing 7,300 legacy water meters with “smart meters” that cost $300 to $500 apiece would cost about $5.5 million and take five years, Sekeroglu said.

Here is a quick recap of the other presentations at the event:

—Trials.ai, presented by co-founder and CEO Kim Walpole, has developed cloud-based software intended to help contract research organizations and pharmaceutical companies manage their clinical trials. The startup uses artificial intelligence-related technologies to help manage the process. (Trials.ai was founded as Catalyst eClinical.)

—Guru is developing technology for museums and other cultural institutions that creates engaging educational experiences viewable on a variety of mobile devices. In a little over a year, Guru has become cash-flow positive, said co-founder and chief creative officer Paul Shockley.

—Clics has developed a computer-controlled system for formulating and dispensing hair colors at professional salons. Co-founder and CEO Charles Brown said the Web-based system would save salons thousands of dollars each year by providing more precise measurements. Clics also avoids the environmental costs of sending hair color tubes to landfills and of treating chemical pigments at wastewater treatment plants, he said.

—BluAgent, presented by co-founder and CEO J.C. Mejia, provides software that helps trucking companies simplify and manage compliance with the safety regulations mandated by federal, state, and local government agencies.

—Approved, presented by co-founder and CEO Andy Taylor, is developing Web-based software that enables any home mortgage lender to streamline the loan approval process.
The system allows customers to automatically collect their supporting documents, shortening the loan application process from one or two weeks to a few minutes, Taylor said.
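Water Pigeon hasn’t disclosed its implementation, but the pipeline described above (photograph the register, OCR the digits, transmit the reading) can be sketched in a few lines of Python. This is a rough illustration assuming the open-source pytesseract OCR library and an invented utility endpoint; none of it is the company’s actual code:

```python
import re

import pytesseract          # open-source OCR engine; a stand-in assumption
import requests             # stands in for the upload over the LTE link
from PIL import Image

UTILITY_ENDPOINT = "https://utility.example.com/readings"   # hypothetical URL

def read_meter(image_path, meter_id):
    """OCR a photo of a mechanical meter register into an integer reading."""
    text = pytesseract.image_to_string(
        Image.open(image_path),
        # psm 7: treat the image as a single line of text; digits only
        config="--psm 7 -c tessedit_char_whitelist=0123456789",
    )
    digits = re.sub(r"\D", "", text)
    if not digits:
        raise ValueError(f"no digits recognized for meter {meter_id}")
    return int(digits)

def transmit(meter_id, reading):
    """Post one reading to the utility's (hypothetical) collection endpoint."""
    requests.post(
        UTILITY_ENDPOINT,
        json={"meter": meter_id, "reading": reading},
        timeout=30,
    )

if __name__ == "__main__":
    transmit("SD-0001", read_meter("meter.jpg", "SD-0001"))
```

At the quoted figures, the lid swap works out to roughly $275 per meter ($2 million across 7,300 meters), versus roughly $750 per meter ($5.5 million across 7,300) for full smart-meter replacement.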
Xconomy’s Disruptors Houston Conference Showcases Texas Innovators - Houston—In just a few days, Xconomy’s Disruptors conference will bring together some of the brightest innovators in Texas. The daylong forum, being held Thursday at the Texas Medical Center’s TMCx accelerator in Houston, features innovation in healthcare, space, transportation, energy, artificial intelligence, and other sectors. Here are some of the highlights:

—Edward Jung, co-founder and CTO of Intellectual Ventures in Seattle. The firm is known for its accumulation of patents and was co-founded by former Microsoft CTO Nathan Myhrvold.

—Being innovative is one thing; doing it in space or at the South Pole adds another degree of difficulty. Former NASA astronaut Scott Parazynski will talk about his work on space missions and as chief medical officer for UTMB’s Center for Polar Medical Operations.

—Advances in robotics and virtual reality are becoming an increasingly important part of the way we live, work, and play. Morris Miller, the CEO of Xenex in San Antonio, will talk about how the startup’s robot is helping to kill bacteria in hospitals (and demonstrate how the robot works), while Jan Goetgeluk, the founder of Virtuix, will talk about his Omni treadmill and the growing virtual reality market.

—Houston is the energy capital of the world. We’re bringing together four seasoned Houston investors (Alex Rozenfeld, Chris Robart, Chip Davis, and Kirk Coburn) to discuss trends in the software, hardware, sensors, and other devices that can help to better find and extract oil—especially in this challenging economic environment.

—Universities have historically had a dual mission: teaching and research. Technology pioneer Bob Metcalfe, the Ethernet inventor, venture capitalist, and now a professor at the University of Texas at Austin, believes there should be a third leg to that stool: innovation.
From Automatic to Zumper: SF Startups Launch Apps, Raise Cash - Tech startup founders are convinced that any human experience can be duplicated—or improved—by translating it into an app. A flurry of Bay Area startups inspired by this creed raised money or introduced new products this week.

—Has your child ever set up a make-believe shop to sell toy snack foods, and implored passing adults to play customer, for hours? Palo Alto, CA-based Osmo supplies an endless line of cartoon animals to fill that role in its new game app, Pizza Co., for kids 5 to 12 years old. The kit includes physical game pieces—ungarnished pizzas and toppings that can be added according to “orders” given by the hungry lion or hippo that appears on an iPad screen. Osmo’s mirror attachment allows the tablet to recognize the game pieces on the tabletop before it, including a pizza or the toy coins offered to customers as change. The animals protest indignantly if kids get the order wrong or shortchange them. Educational game maker Osmo says Pizza Co. teaches children an array of skills, including basic math, entrepreneurship, purchasing, rudimentary bookkeeping, and satisfying the animals without going broke. Maybe these kids will go on to translate even more of our daily routines into apps.

—Zumper is trying to eliminate the physical tasks involved in the arduous human experience called apartment hunting, like driving around and filling out applications. The San Francisco-based startup is trying to wrap as many of those tasks as possible into an app, so you can maybe find a new home just by tapping on your mobile phone with your feet up on the couch. The company announced a $17.6 million Series B funding round led by Breyer Capital and Foxhaven Asset Management; other participating investors included Kleiner Perkins Caufield & Byers and Goodwater Capital. Zumper says it displays a million apartment and house listings a month, nationwide. Hopeful renters can lodge their credit checks, rental history, and background checks with Zumper, and press an “Instant Apply” button to send the information to landlords in a bid for the listings they like.

—San Mateo, CA-based Reali also raised money for a real estate marketplace where house hunters can bid online. Reali focuses on the process of buying or selling a home, as a self-described “full service real estate broker” within an app. The company aims to make transactions more efficient by matching buyers and sellers while reducing agents’ commissions. Reali says it launched its listings this week in Bay Area communities including Palo Alto and East Palo Alto, and plans to extend the service to other Silicon Valley cities shortly. Reali announced a seed financing round that raised $2 million from investors including Joe Montana, other angels, and Dragonfly Investments Group, which was founded by Reali’s co-founders, Amit Haller and Ami Avrahami.

—In spite of all the apps that let us manipulate the universe from our couches, many of us still drive around to do stuff. But cars have their own apps now. This week San Francisco-based Automatic, which connects cars to the Internet, launched a less expensive version of its system that collects data such as trip logs and sends it to a smartphone app. GPS-equipped Automatic Lite plugs in under the car’s dashboard and records trip mileage that employers can reimburse; taps into signals of engine trouble; and directs drivers to the cheapest nearby gas station. The new product costs about $80.
The company’s Automatic Pro costs about $130 and includes extra features such as crash alerts and car location tracking.

—Verdigris has a mobile app for building managers who need to keep energy costs down. The Mountain View, CA-based startup installs sensors on building mains, panels, and branch circuits to meter power consumption at the level of each office space. The system can be used to identify faulty equipment such as broken motors, to verify vendors’ claims about energy efficiency, and to plan upgrades, Verdigris says (a toy sketch of circuit-level fault flagging follows this article). The company says it raised a $6.7 million Series A extension round led by St. Petersburg, FL-based engineering and supply chain management company Jabil (NYSE: JBL), joined by Verizon Ventures, Stanford StartX Fund, and previous angel investors. Verdigris has now raised a total of $16 million. Data from Verdigris’s energy-use monitoring is sent to the cloud via Wi-Fi or Verizon’s cellular network.
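Verdigris’s fault-detection methods are proprietary; as a toy illustration only, here is one simple way circuit-level readings could flag a failing load, using a rolling-mean deviation test. The approach, names, and thresholds are my assumptions, not the company’s:

```python
from collections import deque
from statistics import mean, stdev

def watch_circuit(samples, window=96, threshold=3.0):
    """Yield power samples that deviate sharply from a circuit's recent baseline.

    samples: iterable of (timestamp, watts) for one branch circuit.
    window: samples in the rolling baseline (96 = one day at 15-minute
    intervals); threshold: standard deviations that count as anomalous.
    All parameters are invented for illustration.
    """
    baseline = deque(maxlen=window)
    for ts, watts in samples:
        if len(baseline) >= 8:  # wait for a minimal history first
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and abs(watts - mu) > threshold * sigma:
                yield ts, watts, mu  # candidate fault, e.g. a failing motor
        baseline.append(watts)

# Example: a load that normally draws ~500 W suddenly spikes to 950 W.
readings = [(i, 500 + i % 3) for i in range(20)] + [(20, 950)]
for ts, watts, mu in watch_circuit(readings):
    print(f"t={ts}: {watts} W vs baseline ~{mu:.0f} W")
```

A real deployment would need to handle normal load cycles (HVAC schedules, weekends) before a deviation test like this would be trustworthy; this sketch only shows the basic shape of the idea.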
