

On Self-Driving Vehicles, Re-evaluating Automation Levels

On October 18, 2020, I read Erik Stayton's article "It's Time to Rethink Levels of Automation for Self-Driving Vehicles," published by the IEEE Society on Social Implications of Technology (SSIT) in its magazine, Technology and Society. Regarding media and the future, Stayton agreed with Lee Vinsel. On standards, Stayton wrote, "As the historian Lee Vinsel argues, standards are not just ways of classifying things. They are also attempts to shape the technological future. Thinking about how the structure of our standards contributes to their use is therefore crucial for making better policy decisions" (Stayton, Sept 2020). Stayton's point is this: technology shall remain a part of the future, but not its complete whole. A media exchange holding ultimate authority works against humanity's central place. Without humans, media becomes another entity's claimable target, and all human records become part of that bounty, too. The claim that a history-based tool's potency can replace a human is not really accurate; the Bible holds related wisdom.
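For readers unfamiliar with the standard Stayton is re-evaluating: the levels in question are the SAE J3016 driving-automation levels, 0 through 5. As a minimal sketch of how that classification sorts responsibility between human and machine (the enum names and the helper function below are my own illustration, not Stayton's or SAE's wording):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, summarized (see the standard for full definitions)."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined assists; human must monitor at all times
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over when requested
    HIGH_AUTOMATION = 4         # no takeover needed, but only in a limited domain
    FULL_AUTOMATION = 5         # no human driver needed anywhere

def human_must_supervise(level: SAELevel) -> bool:
    """Rough rule of thumb: through level 3, a human must still
    monitor or stand ready to intervene; at 4 and 5 the system
    claims the driving task within its domain."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# Example: a "Level 2" production car still requires a supervising human.
print(human_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
```

Stayton's argument, as I read it, is that a ladder like this shapes policy expectations as much as it describes vehicles, which is exactly Vinsel's point about standards.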

Of similitude, there is a Psalm: "Like arrows in the hands of a warrior are children born in one's youth" (Psalm 127:4, NIV). The Psalmist's point is this: accept the arrows' superior use in a warrior's hands, and you may prosper with children and with strength. Yet a human still performs the job, so media may be used very well; children cannot be replaced, either. Even though autonomy is wanted, it is neither a necessary nor a sufficient attribute of human-enabling media. Correct media shall protect skilled labor. For many jobs, overkill by supposedly autonomous systems means this: the jobs considered lost were never a long-term success. There is no need to argue against this; as an example, consider a one- or two-year entry-level role at a McDonald's store. Even that argument is unnecessary, because it is a pathway job; not because McDonald's is very unhealthy, but worse, because the role causes disrepute. Thus, I shall drive my own vehicle, or I shall build a sufficient vehicle that is not "supposedly" automated. I say 'supposed' autonomy because someone is doing the programming, and he or she ought to be held responsible. After all, someone is paying him or her.


Reference

Stayton, Erik. "It's Time to Rethink Levels of Automation for Self-Driving Vehicles." Technology and Society, IEEE SSIT, September 2020.

  


