
The hazards of confirmation bias in life and work [SLIDES]

The human brain isn’t really equipped to process the volume of information that floods it moment-to-moment. Instead, we’ve developed shortcuts that help make sense of all the people, ideas, and thoughts that pass through our consciousness. This is great for mental efficiency because it lightens cognitive load. 

But these mental shortcuts also favor our pre-existing beliefs over new information, generating an unfortunate side effect called confirmation bias. This bias can creep into pretty much everything we do, and it gets magnified online, where our searches, news, and social connections tend to align with our current belief systems.

This presentation highlights key concepts from our article about how confirmation bias affects our lives. These slides examine how confirmation bias works for and against us, explore some of the ways it can wrap us in ideological bubbles, and then address how we can burst our own bubbles of bias.

Download the full PDF version here.


“Are you thinking what I’m thinking?” The new science behind brain-to-brain communication.

The evolution of communication throughout history has largely been about taming two dimensions: time and distance. That was as true of spoken language as it was of the telegraph or email. Humans have been so successful at taming these factors that, today, neither time nor distance is a meaningful bottleneck to transmitting your thoughts.

In fact, if you want to tell a friend living in a different hemisphere what you just ate for lunch, the most time-consuming part of the process is physically typing or tapping the message. Communication these days can be so fast that it’s fair to consider it “instantaneous,” though technically it’s not quite there yet. And with all of the messaging we do in a day, all of that typing and tapping adds up.

We still rely heavily on fingers and thumbs to get the message across, but for how long will this be the case? It’s too soon for anything but speculation, but research groups and corporations are working on new technologies that might allow us to share our ideas using thoughts alone. Flying cars, walking-talking robots, and now sci-fi telepathy? It’s a fascinating time to be alive.

But are we ready for a technology like this? Imagine being able to think a thought around the globe to anyone you want. No typing, speaking, or writing required. The technology required to achieve this is already in the labs, with proofs of concept being unveiled with increasing frequency.

The main hurdle to developing this technology is the brain itself, in all its fine and complex detail. So let’s start with a look at what we know (and don’t) about how the brain processes language, then dive into examples of primitive brain-to-brain and brain-to-machine communication devices moving through labs around the world.


Understanding what the brain is saying

Rattling around in the skull are roughly 100 billion neurons with more than 100 trillion synaptic connections between them, all arranged in a way that is as unique as a fingerprint. That said, there is also a great deal of commonality between human brains. Their look and shape remain relatively consistent, as do the abilities and tasks that certain brain regions specialize in. For instance, the signals from the eyes travel to the back of the brain, where they are decoded and analyzed by the visual cortex. The motor cortex runs from the temple to the top of the head and fires up when body parts need to move.

The commonality between brains has allowed brain regions to be reliably mapped—call it human cerebral cartography. The areas corresponding to the tongue and mouth map to the sides of the cortex, near the temple. The area corresponding to the feet can be found at the top of the brain, nestled between the two hemispheres. And so on.

The complexity of the brain is found not only in the sheer number of neurons but in their variety. There are neurons that respond to specific directions of movement and to certain shapes, for instance. It gets far more complex when we look at neurons related to appreciating art, anticipating chess moves, or converting our ideas into discrete sentences. For these actions, the brain engages in some incredibly complex calculations.

The brain learns how to interpret the world around it as it grows. And, again, this is a highly personalized process. While the brain routes certain types of sensory experience to the same general areas in all brains, how we end up processing specific experiences, and how we learn to interpret them, is shaped by the individual.

Which brings us to language. The word “cat” won’t be found in the same place in your brain as it is in someone else’s, because each of you learned it in a different way and through different experiences. This is an important detail, because brain-to-brain communication in any form requires a device capable of interpreting a given pattern of neurons as a particular idea, thought, or concept. This is an almost inconceivably difficult task.

Neurons are the basis of communication within the brain, but they are just one of the key cell types. In the cortex, where all the higher-level activities that we associate with human thought take place, there are neurons and their constituent parts—long dendrites and axons that can extend to other parts of the brain—along with glial cells and blood vessels. It’s a tightly packed jumble, and any reading and interpreting of its signals must reliably account for all of these elements.

Technology and reading the mind

As it stands today, the best ways to measure brain activity are also the most invasive. For brain-to-brain communication to become feasible, other means of interacting with the brain are required. Ideally, we would be able to spy on the activity of individual neurons without having to open the skull.

One current method for reading brain signals is functional magnetic resonance imaging, or fMRI. This is the familiar large tube that requires a patient to lie unmoving inside it while it measures blood flow in certain areas of the brain. It’s cumbersome and crude, and it isn’t likely to be useful in brain-to-brain communication.

Another common brain scanning technology is the electroencephalogram, or EEG. This technology measures the activity of an area of the brain using electrodes placed on the scalp. It can’t read neuronal signals in high detail and, while better than lying inside a machine, it requires a skull cap and cables.

The methods that record brain activity in the highest detail are also the most invasive. Typically, they require opening the skull and inserting tiny wires into the brain, which pick up the signals generated by neurons. Which isn’t easy. There is not a lot of structural uniformity in the brain, given that a single square millimeter of cortex tissue can contain upwards of 100,000 neurons along with many other types of cells. And since the brain forms scar tissue around any foreign object it detects, implanted devices would need to be designed in a way that convinces the brain these objects are native and natural.

If all of this makes brain-to-brain communication sound practically insurmountable, consider that we’re only getting started. We’ve only focused on one side of the equation—reading signals. If we want another, unique-as-a-snowflake brain to be able to interpret those signals, we would need to be able to stimulate neurons, which creates another set of engineering challenges.

Recent advances are making this less theoretical

But let’s not count science out, or call this concept a pipe dream, because there are already working examples of some potential precursor technologies.

One such idea is neural lace, which is a fine mesh containing electrodes that can be injected into brain tissue. The neural lace is rolled up and placed in a tiny glass syringe, and expands once released into the desired area, where it can then record surrounding activity.

Syringe-injectable electronics like neural lace have been successfully implanted in rodents. The researchers applied the lace to two areas of an anesthetized rat’s brain, including the hippocampus, an area central to forming memories. Once there, it was able to record neuronal activity and, equally promising, it didn’t prompt an immune-system response.

While installing neural lace would still require an invasive procedure, the study’s authors noted, “Compared to other delivery methods, our syringe injection approach allows the delivery of large (with respect to the injection opening) flexible electronics into cavities and existing synthetic materials through small injection sites and relatively rigid shells.”

Another brain interface concept is neural dust—dust-sized silicon sensors that can read activity and stimulate nerves and muscles. The sensors rely on ultrasound generated by a small external device. Ultrasound can relay information between the dust and external devices wirelessly, and can reach almost anywhere in the body. What’s more, the dust can convert ultrasound vibrations into electricity that can then power an on-board transistor, eliminating the need for batteries while maintaining the ability to stimulate nerve fibers.

Researchers estimate they can shrink ultrasonic, low-power neural dust down to half the width of a human hair. As of now, neural dust devices are capable of functioning within the peripheral nervous system, but further advances will need to be made before they are small enough to use in the brain.

Not without their limitations, these technologies hint at the possible improvements that can be made to current methods. While it may be some time before we develop true brain-to-brain communication, a few game-changing innovations might be just around the corner, and a few key pieces are already in place.

The current state of brain-to-brain communication

There are two sides to the brain-to-brain communication story: reading the signals from one brain and introducing them comprehensibly into another. We have working examples for each side of the equation. 

For instance, cochlear implants convert sound signals into electrical signals, which by way of an electrode array can stimulate the auditory nerve within the ear, allowing people to hear. A similar process happens in retinal implants, which sit at the back of the eye and stimulate the nerves in response to light signals. The nerves within the eyes and ears are, however, quite different from those in the cerebral cortex, both in terms of complexity and accessibility.

There are people who have lost limbs, or lost the use of their limbs, who have learned to control robotic arms or mouse cursors. Juliano Pinto, for instance, became a paraplegic after a car accident in São Paulo, but thanks to a large exoskeleton and an EEG cap, he was able to kick a football at the opening of the 2014 World Cup.

When it comes to sending signals to other brains, there are a few working examples. For instance, one non-invasive brain-to-brain method was demonstrated in an experiment that successfully allowed a man’s thoughts to wiggle the tail of an anesthetized rat. When the man thought a specific thought, the signal would stimulate an area of the motor cortex in the rat, which caused its tail to swing.

This is a long way from transferring language and ideas, of course. In that realm, an experiment with human subjects took place in which two people had their brain signals for “hola” and “ciao” converted to binary and transmitted between India and France. The method was cumbersome, however, as the receiver would experience the signal as flashes of light in the corner of their vision, not as the actual word being transmitted.

Movement appears to be an easier activity to transfer between brains, as the neuronal signatures are easier to spot and interpret. Language is complex and detailed. Most adults have an approximately 42,000-word vocabulary, which is a lot of neuronal signals to be interpreted if we’re going to start having brain-to-brain conversations—and that’s before the added complexities of context and humor and sarcasm.

We’re not quite ready for this

There’s something magical about the brain that makes it more than just another engineering challenge. So perhaps it’s the sci-fi appeal of transmitting messages with our thoughts that has our collective imaginations running wild. But there are also some deep challenges to adopting a technology like this.

Take just one consideration: identity. Having another person’s voice not heard but experienced in your mind, just as if it were your own thoughts, could have striking effects on your sense of self and identity. If your thoughts merge with everyone else’s thoughts, where does your “I” begin and end? Not to mention all the opportunities for hacking and spying and other hijinks.

There’s also the chance that we will have these mental conversations not just with other people, but with bots and machines. If we can call upon the wealth of knowledge available on the Internet and have it communicated to us as naturally and easily as recalling a memory, we may see it—perhaps legitimately—as an augmentation and extension of ourselves, not as some external source of information. If we think fake news and spam are a problem now…

Are we at all prepared for such a fundamental change to communication itself? How quickly can people adapt to such an ability in ways that are positive and healthy for the individual and society? For now, and for a long time to come, these ideas will remain kindling for the fire of imagination. But it’s a blazingly fascinating fire.


Entefy co-founder Brienne talks intellectual property with Wharton MBA students

As a growing company, Entefy seeks opportunities to share insights with other entrepreneurs. Last week, co-founder Brienne joined Daniel Peterson, Intellectual Property attorney at Blank Rome LLP and friend of Entefy, to give a talk to Wharton MBA students at their San Francisco campus. The duo presented their thoughts on how the ownership of IP assets plays into entrepreneurship and the startup journey. 

It was a beautiful evening for a visit to Wharton’s campus overlooking the Embarcadero and Bay Bridge. Brienne and Daniel were warmly welcomed by Professor Laura Huang and students before jumping into their presentation. Daniel provided an overview of intellectual property and Brienne discussed Entefy’s filed and issued patents, trademarks, and copyrights. The students asked questions based on their personal experiences and, as the event drew to a close, they met one-on-one with Brienne and Daniel.

Entefy promotes courageous innovation and entrepreneurship around the world. We welcome the opportunity to share our team’s experiences and discoveries with more groups like Wharton’s MBA students who are passionate about business and technology.


Skyrocketing video consumption [VIDEO]

Take a wild guess at the number of YouTube videos watched every day. The number is astounding—we’re talking billions and billions. And there’s a reason for it.

Each day, 10 to 20 billion YouTube videos are consumed and the average adult watches 76 minutes of digital video. By 2019, video is expected to account for 80% of all consumer Internet traffic. Video is not only here to stay, but for many people it accounts for the majority of their online activity. 

This video enFact explores reasons why everyone is watching so much video these days. The answer lies in how our brains process different inputs like video and text. 

Read the original version of this enFact here.

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you. 


What would professional sports look like with AI referees and other smart tech? [SLIDES]

Does the idea of watching sporting events with AI referees sound futuristic? It certainly might. But when you take a look around the world of professional sports—football, soccer, fencing, basketball—advanced technologies are already having an impact on the roles of referees, coaches, players, and fans.

In fact, “precursor” technologies that provide the sensory input data for yet-to-be-invented AI algorithms are already in use. In some sports, athletes’ uniforms feature wearable devices and refs are using smart technologies to call plays. Technology looks likely to have a serious impact on how games are played and watched.

This presentation highlights key points from our article about how AI and other smart technologies might impact the future of professional sports. These slides provide an overview of the systems in use today, the rapid implementation of new smart technologies, and what fully automated refereeing might look like. 

Download the full PDF version here.


Entefy Patent Advances the State-of-the-Art in Digital Communication

Entefy’s latest patent unlocks new possibilities in managing digital conversations across apps and services 

PALO ALTO, May 10, 2017 – Entefy Inc. announced today that the company was recently issued a new patent by the U.S. Patent and Trademark Office (USPTO). Patent No. 9,639,600 describes a “System and method of personalized message threading for a multi-format, multi-protocol communication system.”

Entefy’s new patent advances the state-of-the-art in digital communication by allowing intelligent threading of conversations across multiple protocols and channels of communication. This innovation helps the Entefy platform make smarter connections about the meaning and context of messages, regardless of where those messages came from or how they were first transmitted.

“With so many apps and services, it can be difficult to keep track of your conversations. This Entefy innovation helps make sense of the conversations we have with other people even when they’re spread out over time and across different apps or services,” said Entefy CEO Alston Ghafourifar. 

Today’s announcement is the latest patent announcement from Entefy in 2017. In March, the company announced the issuance of a new patent covering encrypted search. That patent strengthened the data security and search capabilities of Entefy’s core technology, deepening the company’s capabilities in protecting user privacy and data. In January, Entefy announced the filing of a group of 13 new patents in artificial intelligence (AI), security, and cyber privacy.

Entefy’s universal communicator simplifies everyday interactions between people, services, and smart things to help you live and work better in today’s digital world.

ABOUT ENTEFY

Entefy is building the first universal communicator—a smart platform that uses artificial intelligence to help you seamlessly interact with the people, services, and smart things in your life—all from a single application that runs beautifully on all your favorite devices. Our core technology combines digital communication with advanced computer vision and natural language processing to create a lightning fast and secure digital experience for people everywhere. 


Multitaskers lose, unitaskers win

As automation and artificial intelligence advance rapidly, there is naturally a lot of discussion about how these technologies will impact work. It’s surprising how important multitasking is to this discussion. Why? Because the jobs least likely to be automated will be those that demand focused high-level thinking. In fact, the World Economic Forum’s “Future of Jobs” report predicts that active learning, critical thinking, creativity, and mathematical reasoning “will be a growing part of the core skills requirements for many industries.” 

Author and computer science professor Cal Newport captured the multitasking challenge in a nutshell:

“Knowledge workers dedicate too much time to shallow work — tasks that almost anyone, with a minimum of training, could accomplish (e-mail replies, logistical planning, tinkering with social media, and so on). This work is attractive because it’s easy, which makes us feel productive, and it’s rich in personal interaction, which we enjoy (there’s something oddly compelling in responding to a question; even if the topic is unimportant). But this type of work is ultimately empty.”

Empty maybe, but certainly common. After all, for a long time now multitasking has represented the gold standard of productivity. Tapping away at an email while leading a conference call while updating a spreadsheet is the sort of behavior that many people admire and emulate. And technology gives us plenty of tools to make it easy to do so. 

Then science came along and ruined all the multitasking fun. It’s now widely understood that “efficient multitasking” is an oxymoron, in large part because of a concept known as switching costs. Switching costs refer to the time it takes the brain to transition from focusing on one task to focusing on another, generally measured at a few tenths of a second per switch. That’s not significant by itself, but those fractions of a second add up when we repeatedly switch from task to task. Do it enough, and research shows that switching quickly between projects can reduce productivity by up to 40 percent.
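To see how those fractions of a second add up, here is a minimal back-of-the-envelope sketch in Python. The per-switch cost and switch counts below are illustrative assumptions for the sake of the example, not figures from the research cited above:

```python
# Back-of-the-envelope estimate of time lost to raw task switching.
# The constants below are illustrative assumptions, not research findings.

SECONDS_PER_SWITCH = 0.3     # assume "a few tenths of a second" per switch
SWITCHES_PER_HOUR = 120      # assume a glance at email/chat every 30 seconds
WORK_HOURS_PER_DAY = 8

def daily_switching_cost(seconds_per_switch: float,
                         switches_per_hour: int,
                         hours: int) -> float:
    """Return minutes per day spent purely on switching overhead."""
    total_switches = switches_per_hour * hours
    return total_switches * seconds_per_switch / 60

if __name__ == "__main__":
    minutes = daily_switching_cost(SECONDS_PER_SWITCH,
                                   SWITCHES_PER_HOUR,
                                   WORK_HOURS_PER_DAY)
    print(f"~{minutes:.0f} minutes per day lost to switching alone")
    # Note: this counts only the raw switch time. It ignores the far larger
    # cost of regaining deep focus after each interruption.
```

Even with these modest assumptions, the raw switching overhead is measurable, and it doesn’t include the much larger cost of regaining focus after each interruption, discussed below.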

Unfortunately, multitasking has become ingrained in our daily habits, and most people struggle to operate any other way. We’ve become so accustomed to using multiple screens, browsing in multiple tabs, and swiping through multiple apps that many of us have forgotten what it is like to focus on one task at a time. 

But if what’s at stake is how well we’ll work in the jobs of the future, it’s worth the effort to shift away from multitasking toward…well, let’s call it unitasking.

The headwinds to unitasking 

Unitasking simply means focusing on one task at a time, ideally with concentration unbroken by distractions such as emails, phone calls, text messages, or push notifications. These days, however, sustained focus can be elusive. When you’re in the habit of always being “on,” the idea of turning “off” can provoke what one researcher calls FOMO, the fear of missing out.

To make matters worse, multitasking on our phones, laptops, desktops, and tablets delivers the illusion that we’re getting a lot done. Then add the fact that practically everyone around us at work is doing the same thing. The pressure to be always available and always busy creates a “trap” that we often deliberately fall into, despite the fact that multitasking “is not a necessary or inevitable condition of life; it’s something we’ve chosen, if only by our acquiescence to it.” 

Dr. Christine Grant, an occupational psychologist at the Centre for Research in Psychology, Behaviour and Achievement at Coventry University, cautions people about these patterns, saying, “The negative impacts of this ‘always on’ culture are that your mind is never resting, you’re not giving your body time to recover, so you’re always stressed. And the more tired and stressed we get, the more mistakes we make. Physical and mental health can suffer.” After all, our brains aren’t designed to handle several tasks at a time.

And that’s an important point. When we think we’re doing three things at once—getting the oil changed, making social plans via text, and catching up on a podcast, for instance—the brain’s ‘executive system’ in the frontal lobe is actually just shifting our attention rapidly between these activities. One MIT neuroscience professor put it this way: “You’re not paying attention to one or two things simultaneously, but switching between them very rapidly.” Aside from a handful of multitask masters, most people can’t do several things simultaneously. The more tasks we attempt, the more poorly we perform all of them. And the more time we waste.

Estimates for the time it takes to fully regain your focus after getting distracted range from 23 minutes and 15 seconds to 30 minutes. We can lose hours if we’re switching gears several times a day. 

Breaking the multitasking habit 

Curbing the impulse to multitask during work is challenging. Fortunately, there are ways to strengthen our unitasking muscles. Scheduling uninterruptible time for priority tasks is a good start. Whether you’re writing a report, preparing a presentation, or learning to code, you need stretches of undisturbed time to make any real progress. You may choose to do this in the mornings before heading into the office or during the least hectic periods of your workday. The key is to eliminate distractions and let colleagues know not to disturb you unless there’s an emergency. 

“To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. If you don’t cultivate this ability, you’re likely to fall behind as technology advances,” Cal Newport wrote. Seek opportunities to stretch your abilities and practice making connections between complex concepts. 

Perhaps the most effective step we can take toward a unitasking mindset is being conscious of the choices we make and how we spend our time. Rather than succumb to the constant impulse to check notifications or log onto email, we can be mindful of whether these actions facilitate the deep learning and dedicated thinking that will help us advance. 

As author and focused work proponent Srinivas Rao wrote in an essay about the competitive advantages of deep work, “Nobody ever changed the world by checking email. Significant creative accomplishments require focus, consistency, good habits, and deep work.” The future of work may well be unitasking, and it could usher in an era of more deliberate, conscientious use of technology to enable focus, not shatter it.


New meaning to “thinking on your feet”

People speak clearly and intelligibly at around 150 words per minute. At that pace, it takes less than 2 minutes to deliver the 266 words in Abraham Lincoln’s Gettysburg Address.

That is a snail’s pace compared to another form of speech that we all engage in: inner speech. Inner speech is the scientific term for ‘talking to yourself in your head,’ the voice of your conscious thinking. Psychologists have measured the rate at which humans produce this speech at 4,000 words per minute, or nearly 27x the speed of speaking aloud. To put just how fast that is in context, the Gettysburg Address at the speed of inner speech would loop about 15 times in the span of a minute.
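The arithmetic behind those figures is easy to verify; here is a quick sketch using only the numbers quoted above:

```python
# Quick check of the figures quoted above.
SPOKEN_WPM = 150          # clear, intelligible speech
INNER_SPEECH_WPM = 4_000  # measured rate of inner speech
GETTYSBURG_WORDS = 266

print(GETTYSBURG_WORDS / SPOKEN_WPM)        # ~1.8 minutes to deliver aloud
print(INNER_SPEECH_WPM / SPOKEN_WPM)        # ~26.7x the speed of speaking aloud
print(INNER_SPEECH_WPM / GETTYSBURG_WORDS)  # ~15 loops of the speech per minute
```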

This gives new meaning to just how fast we’re thinking when we’re “thinking on our feet.”

Entefy’s enFacts are illuminating nuggets of information about the intersection of communications, artificial intelligence, security and cyber privacy, and the Internet of Things. Have an idea for an enFact? We would love to hear from you.


Human investors can be irrational, but is AI the answer?

Data indicates that about 55% of Americans are invested in the stock market through direct ownership of shares, stock mutual fund holdings, or retirement accounts. That figure jumps to 88% for households with income greater than $75,000. Many investors trust someone else to handle their investments, like a stock broker or financial advisor. What’s interesting is how often that “someone” is now an artificial intelligence system.

Already, some 1,360 hedge funds rely on computer models to trade stocks and other investments. These funds represent $197 billion of investor money being directed by lines of computer code. Most of these are traditional “quant” (quantitative) funds that use computer models to predict share price movements and determine trades.

But an increasing number of hedge funds are entirely directed by AI-powered trading engines. These funds are at the vanguard of the use of AI in financial markets. And as in many other markets where AI is transforming business as usual, its use in investing is producing innovative new products while simultaneously raising new questions.

The growth in AI-directed investing could have radical consequences. Especially in a scenario where a single investor or investment fund using proprietary AI is able to secure an unfair advantage over other market actors. Call it the stock market singularity. And the groundwork has already been laid.

Financial neural networks resurrected

The investment world began looking at artificial intelligence in the 1990s. The focus then was on artificial neural networks (ANNs), computer algorithms modeled after the connections that power the human brain. ANNs can be thought of as predecessors to today’s machine learning systems, computers that self-modify by learning from massive data sets. Neural networks were expected by some to transform trading in the 1990s, but the revolution never came. Their legacy, programmatic trading, lives on.

Programmatic trading is computer-controlled investing that uses algorithms to perform the roles of traditional investment professionals, like spotting opportunities, managing risk, and making lightning-fast trading decisions. This approach shifts a lot of decision-making onto computers, but the technology by itself hasn’t given any one market actor an unfair advantage over the others.
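As a rough illustration of what it means for an algorithm to perform the role of an investment professional, here is a toy moving-average crossover rule in Python. It’s a deliberately simple sketch of one classic programmatic trading idea, not a strategy any particular fund actually uses; the function name, windows, and price series are invented for this example:

```python
# Toy programmatic trading rule: a moving-average crossover.
# Purely illustrative; real systems add risk management, execution logic, etc.
from statistics import mean

def crossover_signal(prices: list[float], short: int = 5, long: int = 20) -> str:
    """Return 'buy', 'sell', or 'hold' based on short- vs. long-term averages."""
    if len(prices) < long:
        return "hold"  # not enough price history yet
    short_ma = mean(prices[-short:])  # average of the most recent prices
    long_ma = mean(prices[-long:])    # average over a longer window
    if short_ma > long_ma:
        return "buy"   # recent prices trending above the longer-term average
    if short_ma < long_ma:
        return "sell"  # recent prices trending below the longer-term average
    return "hold"

# Example: a steadily rising price series produces a "buy" signal.
history = [100 + 0.4 * i for i in range(30)]
print(crossover_signal(history))  # -> buy
```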

Today’s neural networks represent a significant increase in capability over programmatic trading systems. The technology is already becoming mainstream: neural networks are used in voice-activated assistants and self-driving cars. Investment funds want to leverage these increasingly sophisticated systems to achieve faster, smarter trades and better yields.

To see why automated AI trading systems might create unprecedented challenges, we need to walk through a few ideas.

On the path to an AI super-investor

Ever wonder why insider trading is illegal? It’s illegal because a single person with information not readily available to other investors has an unfair advantage. This is, in turn, significant because share prices should, in theory, reflect all of the information available about a company. Having knowledge that other participants don’t have creates the opportunity to trade (buy or sell) shares in anticipation of the price change that happens when that information becomes widely known. Knowing in advance that, for example, a company’s quarterly sales will be unusually strong allows that person to buy shares at a price lower than they will trade at when that information becomes public. Insider trading rules, and other restrictions on the investment activities of those with special access to information, are one way to enforce fairness among market participants.

Getting back to automated AI systems, we next need to get theoretical. It’s possible to imagine an AI system that’s a perfect predictor of a single financial variable like, say, interest rates. Another system might develop infallible inflation predictions. A third might get really good at predicting earnings growth in a particular industry. And so on. 

If these individual systems are possible, it’s within reason to imagine a single system made up of several of these specialist systems. And though it’s far outside the limits of today’s technology, it’s at least plausible that such a system could make use of countless specialist systems that are near-perfect predictors of all of the key factors that move markets. That single system would, by this logic, be virtually perfect at predicting price changes in any—or all—tradeable assets.
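To make the thought experiment concrete, here is a minimal sketch of what such a composite system might look like: several hypothetical specialist predictors, each scoring one market factor, blended into a single signal. Every predictor, weight, and score below is an invented placeholder; nothing close to a “perfect” specialist predictor exists today.

```python
# Thought-experiment sketch: blending hypothetical specialist predictors
# into one signal. All predictors, weights, and scores are invented placeholders.
from typing import Callable, Dict

# Each specialist returns a score in [-1, 1]: negative = bearish, positive = bullish.
specialists: Dict[str, Callable[[str], float]] = {
    "interest_rates":  lambda asset: 0.2,   # placeholder output
    "inflation":       lambda asset: -0.1,  # placeholder output
    "sector_earnings": lambda asset: 0.6,   # placeholder output
}

weights = {"interest_rates": 0.3, "inflation": 0.2, "sector_earnings": 0.5}

def combined_signal(asset: str) -> float:
    """Weighted blend of the specialist scores for a single asset."""
    return sum(weights[name] * predict(asset)
               for name, predict in specialists.items())

print(combined_signal("ACME"))  # 0.3*0.2 + 0.2*(-0.1) + 0.5*0.6 = 0.34
```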

Now picture this AI system in the hands of a single investor. 

The perfect AI super-investor hasn’t been invented yet. But this is the sort of possibility that rapid advances in artificial intelligence require us to consider. Because if there’s a chance such a perfect system could come into existence—for the benefit of one individual or group to the detriment of everyone else—it’s worth having a conversation about what to do about it. Luckily, the financial singularity conversation has already started.

Hedging bets on AI

There are, in general, two responses to the financial singularity question. The first is that it’s not actually possible. The second is that it wouldn’t actually be that bad.

Firmly in the second camp is Babak Hodjat, the founder of one AI trading fund, who is eager to see trading in the hands of AI. “It’s well documented we humans make mistakes,” he told Bloomberg. “For me, it’s scarier to be relying on those human-based intuitions and justifications than relying on purely what the data and statistics are telling you.”

Representing the first group, others dismiss the idea that one company will achieve such advances without competitors close on their heels. “If someone finds a trick that works, not only will other funds latch on to it but other investors will pour money into [it]. It’s really hard to envision a situation where it doesn’t just get arbitraged away,” author Ben Carlson told Wired.

There is also the idea that a financial singularity would be beneficial. By this reasoning, a market that operates purely on logic could reach perfect efficiency, where all assets are priced correctly with no need for human intervention. Computers would set prices based on optimized projections that include future profits, tech advancements, and demographic shifts, according to Robert J. Shiller, a Yale economics professor.

Shiller is skeptical that a financial singularity lies ahead. He argues that it would have to occur in a world where markets run according to rationality alone. But humans are irrational, and a successful AI would have to account for our unpredictable natures.

A future worth pondering 

At present, trading algorithms can fake one another out to gain advantages, which the BBC notes is illegal but difficult to prove. They can also predict a slower program’s next moves and then trade accordingly. With firms competing aggressively for faster trading times, a slower program can be left at a serious disadvantage. As algorithms become more intelligent and more powerful, the financial industry will require ever-smarter safeguards against exploitation and risk.

Then there are the potential glitches. In August 2012, a trading program at one fund “ran amok,” creating losses of $10 million a minute. It took nearly an hour for the human team to identify and solve the problem, and the firm lost $440 million in the process. Two years earlier, an algorithmic trade triggered a ‘flash crash,’ in which U.S. share and futures indices dropped 10 percent within minutes.

Some say those incidents are telling preludes to disaster. A rogue algorithm at one of the country’s major banks, or a cascading failure in which multiple big banks are derailed by faulty programs, could lead to a catastrophic crash.  

Whether the financial singularity will happen, and whether its impact would be positive or negative, remains to be seen. But we should all be paying attention, because as we witnessed in 2008 with the financial crisis, what happens in the market affects us all.


Inattention: the brain’s complex relationship with social media [SLIDES]

Social media feeds can be a great way to pass the time discovering new information from sources we trust and admire. But repeatedly revisiting those feeds is also perfectly suited to overindulgence, because the impulse to “check in” runs on the same mental machinery that drives us to overdo exercise or sweets or coffee.


Then there’s the attention factor. Research suggests that low levels of focus can negatively impact memory formation. There are ways to improve memory retention, starting with understanding how memories are formed. So what does it take to remember?

Entefy curated a presentation based on our article about the brain’s complex relationship with social media. These slides provide a research-driven perspective on how the human brain adapts (and doesn’t) to the unique characteristics of social media technology.