Monday, May 13, 2013

Dating


Olympiad dating:
Among the ancient Greeks, a common method for indicating the passage of years was based on the order of Olympic Games, first held in 776 BC. The pan-Hellenic games provided the various independent city-states a mutually recognizable system of dates. The first Olympiad also marks the traditional beginning of Greek historical civilization and record-keeping, and it continues to be regarded as the end of Western prehistory and the beginning of its historical epoch.
This system was in use from the 4th century BC until the 3rd or 4th century AD.

Seleucid era:
The Seleucid era was used in much of the Middle East from the 4th century BC to the 6th century AD, and continued until the 10th century AD among Oriental Christians. The era is computed from the epoch 312 BC: in August of that year Seleucus I Nicator captured Babylon and began his reign over the Asian portions of Alexander the Great's empire. Thus depending on whether the calendar year is taken as starting on 1 Tishri or on 1 Nisan (respectively the start of the Jewish civil and ecclesiastical years) the Seleucid era begins either in 311 BC (the Jewish reckoning) or in 312 BC (the Greek reckoning: October–September).

Consular dating:
An early and common practice was Roman 'consular' dating. This involved naming both consules ordinarii, who had taken up this office on 1 January of the relevant civil year. Sometimes one or both consuls might not be appointed until November or December of the previous year, and news of the appointment may not have reached parts of the Roman empire for several months into the current year; thus we find the occasional inscription where the year is defined as "after the consulate" of a pair of consuls.
The use of consular dating ended in AD 541 when the emperor Justinian I discontinued appointing consuls. The last consul nominated was Anicius Faustus Albinus Basilius. Soon afterwards, imperial regnal dating was adopted in its place.

Dating from the founding of Rome:
Another method of dating, rarely used, was anno urbis conditae (Latin: "in the year of the founded city", abbreviated AUC, where the "city" meant Rome). (It is often incorrectly given that AUC stands for ab urbe condita, which is the title of Titus Livius's history of Rome.)
Several epochs were in use by Roman historians. Modern historians usually adopt the epoch of Varro, which we place in 753 BC.
The system was introduced by Marcus Terentius Varro in the 1st century BC. The first day of its year was Founder's Day (April 21), although most modern historians assume that it coincides with the modern historical year (January 1 to December 31). It was rarely used in the Roman calendar and in the early Julian calendar — naming the two consuls who held office in a particular year was dominant. AD 2013 is thus approximately the same as AUC 2766 (2013 + 753): since 753 BC is AUC 1, 1 BC is AUC 753 and, there being no year AD 0, AD 1 is AUC 754.
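As a quick check of that arithmetic, here is a minimal sketch of the Varronian conversion (my own helper functions, purely illustrative); the +753/754 offsets already absorb the absence of a year 0.

# Sketch of the Varronian AUC conversion described above.
# 753 BC = AUC 1, so 1 BC = AUC 753 and AD 1 = AUC 754 (there is no year 0).

def ad_to_auc(ad_year):
    """Convert an AD year (1, 2, ...) into the corresponding AUC year."""
    return ad_year + 753            # AD 2013 -> AUC 2766

def bc_to_auc(bc_year):
    """Convert a BC year (753, 752, ..., 1) into the corresponding AUC year."""
    return 754 - bc_year            # 753 BC -> AUC 1, 1 BC -> AUC 753

print(ad_to_auc(2013))              # 2766
print(bc_to_auc(753))               # 1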
About AD 400, the Iberian historian Orosius used the AUC era. Pope Boniface IV (about AD 600) may have been the first to use both the AUC era and the Anno Domini era (he put AD 607 = AUC 1360).

Maya:
A different form of the Maya calendar was used to track longer periods of time and for the inscription of calendar dates (i.e., identifying when one event occurred in relation to others). This form, known as the Long Count, is based upon the number of elapsed days since a mythological starting point. According to the calibration between the Long Count and Western calendars accepted by the great majority of Maya researchers (known as the GMT correlation), this starting point is equivalent to August 11, 3114 BC in the proleptic Gregorian calendar, or September 6 in the Julian calendar (−3113 astronomical).
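To make the elapsed-days idea concrete, here is a minimal sketch (my own illustration, not from the post) that converts a Long Count date to a Julian Day Number, assuming the standard Long Count place values and the GMT correlation constant of JDN 584283, neither of which is spelled out above.

# Sketch: convert a Maya Long Count date to a Julian Day Number (JDN)
# under the GMT correlation, where Long Count 0.0.0.0.0 = JDN 584283
# (August 11, 3114 BC in the proleptic Gregorian calendar).
# Assumed place values: 1 uinal = 20 kin, 1 tun = 18 uinal,
# 1 katun = 20 tun, 1 baktun = 20 katun.

GMT_CORRELATION_JDN = 584283
DAYS_PER_PLACE = (144000, 7200, 360, 20, 1)   # baktun, katun, tun, uinal, kin

def long_count_to_jdn(baktun, katun, tun, uinal, kin):
    """Return the Julian Day Number of a Long Count date."""
    digits = (baktun, katun, tun, uinal, kin)
    elapsed_days = sum(d * v for d, v in zip(digits, DAYS_PER_PLACE))
    return GMT_CORRELATION_JDN + elapsed_days

# 13.0.0.0.0, the completion of the 13th baktun, gives JDN 2456283,
# which is December 21, 2012 in the Gregorian calendar.
print(long_count_to_jdn(13, 0, 0, 0, 0))      # 2456283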

Anno Domini - AD

The Anno Domini dating system was devised in 525 by Dionysius Exiguus to enumerate the years in his Easter table. His system was intended to replace the Diocletian era that had been used in an old Easter table, because he did not wish to continue the memory of a tyrant who persecuted Christians.
The last year of the old table, Diocletian 247, was immediately followed by the first year of his table, AD 532. When he devised his table, Julian calendar years were identified by naming the consuls who held office that year—he himself stated that the "present year" was "the consulship of Probus Junior", which was 525 years "since the incarnation of our Lord Jesus Christ". Thus Dionysius implied that Jesus' Incarnation occurred 525 years earlier, without stating the specific year in which his birth or conception occurred.
However, nowhere in his exposition of his table does Dionysius relate his epoch to any other dating system, whether consulate, Olympiad, year of the world, or regnal year of Augustus; much less does he explain or justify the underlying date.

Blackburn & Holford-Strevens briefly present arguments for 2 BC, 1 BC, or AD 1 as the year Dionysius intended for the Nativity or Incarnation. Among the sources of confusion are:
  • In modern times Incarnation is synonymous with the conception, but some ancient writers, such as Bede, considered Incarnation to be synonymous with the Nativity
  • The civil, or consular, year began on 1 January, but the Diocletian year began on 29 August
  • There were inaccuracies in the list of consuls
  • There were confused summations of emperors' regnal years
It has also been speculated by Georges Declercq that Dionysius' desire to replace Diocletian years (Diocletian persecuted Christians) with a calendar based on the Incarnation of Christ was intended to prevent people from believing in the imminent end of the world. At the time it was believed that the Resurrection and the end of the world would occur 500 years after the birth of Jesus. The old Anno Mundi calendar theoretically commenced with the creation of the world, based on information in the Old Testament. On that reckoning Jesus was born in the year 5500 (that is, 5500 years after the world was created), and the year 6000 of the Anno Mundi calendar would mark the end of the world. Anno Mundi 6000 (approximately AD 500) was thus equated with the Resurrection of Christ and the end of the world, but this date had already passed in the time of Dionysius.

Dionysius therefore searched for a new end-of-the-world date further in the future. He was heavily influenced by ancient cosmology, in particular the doctrine of the Great Year, which places a strong emphasis on planetary conjunctions. Dionysius decided that when all the planets were in conjunction, this cosmic event would mark the end of the world. He accurately calculated that such a conjunction would occur in May AD 2000, about 1,500 years after his own lifetime.

Dionysius then applied another cosmological timing mechanism, based on the precession of the equinoxes, which had been discovered only about six centuries earlier. Many people at the time believed, incorrectly, that the precessional cycle was 24,000 years long and comprised twelve astrological ages of 2,000 years each. Dionysius reasoned that if the planetary alignment of May 2000 marked the end of an age, then the birth of Jesus Christ had marked the beginning of that age 2,000 years earlier, on 23 March (the date of the Northern Hemisphere spring equinox and the beginning of many yearly calendars from ancient times). He therefore deducted 2,000 years from the May 2000 conjunction to produce AD 1 for the Incarnation of Christ, even though modern scholars and the Roman Catholic Church acknowledge that the birth of Jesus was a few years earlier than AD 1.


Thursday, April 18, 2013

What Makes a Leader?

It was Daniel Goleman who first brought the term “emotional intelligence” to a wide audience with his 1995 book of that name, and it was Goleman who first applied the concept to business with his 1998 HBR article, reprinted here. In his research at nearly 200 large, global companies, Goleman found that while the qualities traditionally associated with leadership—such as intelligence, toughness, determination, and vision—are required for success, they are insufficient. Truly effective leaders are also distinguished by a high degree of emotional intelligence, which includes self-awareness, self-regulation, motivation, empathy, and social skill.
These qualities may sound “soft” and unbusinesslike, but Goleman found direct ties between emotional intelligence and measurable business results. While emotional intelligence’s relevance to business has continued to spark debate over the past six years, Goleman’s article remains the definitive reference on the subject, with a description of each component of emotional intelligence and a detailed discussion of how to recognize it in potential leaders, how and why it connects to performance, and how it can be learned.
Every businessperson knows a story about a highly intelligent, highly skilled executive who was promoted into a leadership position only to fail at the job. And they also know a story about someone with solid—but not extraordinary—intellectual abilities and technical skills who was promoted into a similar position and then soared.
Such anecdotes support the widespread belief that identifying individuals with the “right stuff” to be leaders is more art than science. After all, the personal styles of superb leaders vary: Some leaders are subdued and analytical; others shout their manifestos from the mountaintops. And just as important, different situations call for different types of leadership. Most mergers need a sensitive negotiator at the helm, whereas many turnarounds require a more forceful authority.
I have found, however, that the most effective leaders are alike in one crucial way: They all have a high degree of what has come to be known as emotional intelligence. It’s not that IQ and technical skills are irrelevant. They do matter, but mainly as “threshold capabilities”; that is, they are the entry-level requirements for executive positions. But my research, along with other recent studies, clearly shows that emotional intelligence is the sine qua non of leadership. Without it, a person can have the best training in the world, an incisive, analytical mind, and an endless supply of smart ideas, but he still won’t make a great leader.
In the course of the past year, my colleagues and I have focused on how emotional intelligence operates at work. We have examined the relationship between emotional intelligence and effective performance, especially in leaders. And we have observed how emotional intelligence shows itself on the job. How can you tell if someone has high emotional intelligence, for example, and how can you recognize it in yourself? In the following pages, we’ll explore these questions, taking each of the components of emotional intelligence—self-awareness, self-regulation, motivation, empathy, and social skill—in turn.
The Five Components of Emotional Intelligence at Work
Evaluating Emotional Intelligence
Most large companies today have employed trained psychologists to develop what are known as “competency models” to aid them in identifying, training, and promoting likely stars in the leadership firmament. The psychologists have also developed such models for lower-level positions. And in recent years, I have analyzed competency models from 188 companies, most of which were large and global and included the likes of Lucent Technologies, British Airways, and Credit Suisse.
In carrying out this work, my objective was to determine which personal capabilities drove outstanding performance within these organizations, and to what degree they did so. I grouped capabilities into three categories: purely technical skills like accounting and business planning; cognitive abilities like analytical reasoning; and competencies demonstrating emotional intelligence, such as the ability to work with others and effectiveness in leading change.
To create some of the competency models, psychologists asked senior managers at the companies to identify the capabilities that typified the organization’s most outstanding leaders. To create other models, the psychologists used objective criteria, such as a division’s profitability, to differentiate the star performers at senior levels within their organizations from the average ones. Those individuals were then extensively interviewed and tested, and their capabilities were compared. This process resulted in the creation of lists of ingredients for highly effective leaders. The lists ranged in length from seven to 15 items and included such ingredients as initiative and strategic vision.
When I analyzed all this data, I found dramatic results. To be sure, intellect was a driver of outstanding performance. Cognitive skills such as big-picture thinking and long-term vision were particularly important. But when I calculated the ratio of technical skills, IQ, and emotional intelligence as ingredients of excellent performance, emotional intelligence proved to be twice as important as the others for jobs at all levels.

HOW SURNAMES BEGAN

The first clue to who we are and where we came from is our names. Yes, our names are our most personal possession. In many ways they tell the world who we are. They can bring us fortune as well as shame. Our names can give others, rightly or wrongly, a predisposition to like us or dislike us. Historically, our names are a fingerprint, an identification, and perhaps a clue as to who we are and where we came from.

Most names in the United States today come from the Hebrew, German, Latin, Irish, Scottish, and Welsh languages. So it's rather ironic that when new immigrants come to our country, they give their children an "American" name which, of course, is of Hebrew or European origin.

In 325 A.D. the Catholic church outlawed the use of pagan names and the names of pagan gods. So the use of Biblical names became the norm. The church went further in 1545, when it made the use of saints' names mandatory at Catholic baptism. As a result, there were only about twenty common names for boys and girls.

Later, in the next century, the Reformation and the Protestant religions rejected Catholic mandates and traditions. So their children were given names from the New Testament and the Old Testament rather than saints' names.

Middle names were first introduced by German nobility in the fifteenth century but did not become common until the seventeen hundreds. Middle names were not common in the United States until after the Revolutionary War. The tradition then was to use the mother's maiden name as the middle name. (Knowing this may be a good first clue when tracing your ancestry around this period; however, nothing is ever absolute.)

There are over 1.5 million family names and their variants used in the U.S. today. The Historical Research Center has researched over 600,000 of these names so far, with thousands being added each year. But there is still a long way to go.

So, Where did all these names come from?

The use of surnames, or "family" names, started in Western countries at about the turn of the last millennium, 1,000 years ago. As the population grew, it became more difficult for commerce to keep track of who owed money to whom. If Peter was actually to pay Paul, then it became important to know which Peter owed which Paul. So last names, or descriptive names indicating which Peter or which Paul, began. At the time, Peter and Paul did not even know or care that a descriptive name was attached to their first names. Nor was the same descriptive name used with each transaction.

It is generally agreed that Western countries developed names in one of four ways. The most popular category, with about 43% of all names falling into it, is LOCATION NAMES. These surnames came from the town, estate, or city where the person lived. Nobles took on the name of their estate and passed it down to their sons. Peasants most often took on the name of their village, or of a distinguishing geographical feature.

Thus names like Atwater, Atwood, Glen, Green, London, Mill, Newtown, Rivers etc. came into being.

The second most common source of names, about 33%, is KINSHIP or SON OF names. You know them: Johnson, Peterson, O'Neill, MacLaughlin, Janowicz, Mendelsohn, Sanchez, and Bertucci, to name just a few. All mean "son of" or "descendant of" a first-named father. However, this was not as simple as it may appear.

You see, it took a few centuries and a king's decree for the SON OF names to become organized. Here's why. Suppose Peter had a newborn son. Let's say he proudly named him John Peterson: John, a good common first name, and son of Peter (Peterson) as the last name. That's simple enough. However, when John Peterson grew up and had a son of his own, he could proudly give him the first name of James and the last name of Johnson, or James Johnson. Why? Because James was a wonderful name for a boy, and by all means, he is the son of John, right? So his name was James Johnson. But when James had a son, he named him Adam Jamison, which then led to the surname Adamson, and so on and so on. And who said tracing your family tree was boring?

So, Peter, Peterson, Johnson, Jamison and Adamson were five generations of direct descendants. This was confusing.

It wasn't until Henry V decreed that surnames had to be included on all official papers that the legal process of standardizing family names began. So Adamson stayed Adamson, at least for a while anyway.

The third most common source of names is OCCUPATIONAL NAMES. Many people think this is the number one source of name derivation, but actually only 15% of names come from this category. This is how we got the Smith, Miller, Taylor, Cooper, Cook, and Farmer names, to name a few. The reason there are so many Smiths, Millers, Taylors, etc., even though this is not the most common source of names, is immigration.

You see, when Hans Becker arrived on these shores, he Americanized his name to Baker. When the Krawczyks arrived, they changed their name to Taylor. The French Charpentier family changed their name to Carpenter, and the Italian Carbone family changed theirs to Miner. It's the translated names that have made OCCUPATIONAL NAMES so common.

The last and least popular source of surname creation, at only 9%, is NICKNAMES or PET NAMES. I had a great-grandfather named Redman. This name could have been taken (or given) because someone had a reddish complexion or red hair, for example. A name like Goodman may have originally described a kind or generous individual. The name Little, Small, or Short was given to a small or short man. And if you have a Stout in your family tree, well... there may have been a reason for that too.

But of the 1.5 million names in this country, not all are of European origin, and some have been around a lot longer than 1,000 years. CHINESE names, for example, date back to 2800 B.C.

In 2852 B.C. the Chinese Emperor mandated that all names come from a sacred poem. It wasn't even a very long poem. And since most people wouldn't choose to name their family after a preposition, for example, this has led to only about 1,000 names in total, of which 60 are common surnames. That is very few names to spread around China's billion-plus people today. In the U.S. there are 1.5 million names used among a population of about a third of a billion.

AFRICAN Americans did not get their surnames, for the most part, from slave owners, as is popularly assumed.

When slaves were brought to this country they were given a random first name by the new master. They did not have last names. Nor were they allowed to refer to themselves by their African tribal names. Surnames were not used by African Americans until after they were freed from slavery.

Once freed, they did not name themselves after the masters of their misery, for who would want a constant reminder of a miserable past? Rather, they chose names that were well known, or that belonged to prestigious families in the South. Many of those names were Irish, Scottish, English, or Welsh.

Even then, last names were not always passed on to the next generation. Often a name was changed to a more "favorable" one whenever its bearer wished. That was the case until the draft of World War I and the implementation of Social Security made such changes more difficult.

Unlike English names, which derived mostly from LOCATION and KINSHIP names, GERMAN names derived mostly from OCCUPATIONAL names, like Kaufman, meaning merchant, or Schmidt, meaning smith. The second most common source of German names was colors, such as Braun (brown), Grun (green), Rosen (rose), Roth (red), Schwarz (black) and Weiss (white). Nicknames were the second-to-last most common source of names, followed by location names.

German Jews, however, were made to take surnames by law in the early 1800s. Those who paid certain German officials were given good names and names of beauty. Those who did not pay were given ugly names like Eselskopf (ass's head), Saumagen (hog's paunch), Durst (thirst) or Bettelarm (destitute).

The SCOTTISH had a problem with infant mortality during the Middle Ages. So, if a Scottish father wanted to be sure a son would carry his full name, he left nothing to chance... and gave all his sons the same first name. The odds were in his favor this way. He wasn't thinking about the frustrated genealogist descendant who would follow 500 years later.

The Scottish also had a practice of changing their last names whenever they moved. The change would be made to please the Lord of the land. So, your Campbell relation may have really been a Fraser or a MacDonald at different times.

Knowing all this, it may seem amazing that we can trace family names at all. But knowing all this can actually help you sort out a more accurate puzzle of who you are and from where you came.


Make Yourself an Expert

“I don’t know what we’d do without him!” That’s what an executive in a Fortune 100 company recently told us about a brilliant project leader. We’ve heard the same sentiment expressed about many highly skilled specialists during the hundred-plus interviews we’ve conducted as part of our research into knowledge use and sharing. In organizations large and small, including NASA, the U.S. Forest Service, SAP, and Raytheon, managers spoke of their dependence on colleagues who have “deep smarts”—business-critical expertise, built up through years of experience, which helps them make wise, swift decisions about both strategy and tactics. These mavens may be top salespeople, technical wizards, risk managers, or operations troubleshooters, but they are all the “go-to” people for a given type of knowledge in their organizations.
Because deep smarts are mostly in experts’ heads—and sometimes people don’t even recognize that they possess them—they aren’t all that easy to pass on. This is a serious problem, both for the organization and for those who hope to become experts themselves. Several professions build apprenticeships into their training systems. Doctors, for instance, learn on the job as interns and residents, under the close guidance of attending physicians, before practicing on their own. But the management profession has no such path. You’re responsible for your own development. If you wish to become a go-to person in your organization but don’t have the time or opportunity to accumulate all the experience of your predecessors, you must acquire the knowledge in a different way. The purpose of this article is to help you do just that.
A Rare Asset
Deep smarts are not merely facts and data that anyone can access. They consist of know-how: skilled ways of thinking, making decisions, and behaving that lead to success again and again. Because they are typically experience-based, deep smarts take time to develop. They are often found in only a few individuals. They are also frequently at risk. Baby boomers—some of whom have knowledge vital to their companies—are retiring in droves. And even in organizations where key experts are years from retiring, there are often only a few people with deep smarts in certain areas. If they’re hired away or fall ill, their knowledge could be lost. In some fields, rapid growth or geographic expansion creates a sudden need for expertise that goes far beyond employees’ years of experience. Whatever the cause, the loss or scarcity of deep smarts can hurt the bottom line when deadlines are missed, a customer is alienated, or a process goes awry.
This potential loss to the organization is an opportunity for would-be experts. Deep smarts can’t be hired off the street or right out of school. High-potential employees who prove their ability to quickly and efficiently acquire expertise will find themselves in great demand.
So how do you acquire deep smarts? By consciously thinking about how the experts in your organization operate and deliberately learning from them. Of course, you can’t—and don’t want to—become a carbon copy of another person. Deeply smart people are unique—a product of their particular mind-set, education, and experience. But you should be able to identify the elements of their knowledge and behavior that make them so valuable to the organization. For example, a colleague of the expert project leader mentioned earlier described him as an exceptional manager who could effortlessly solve any technical problem and always got the best out of his people. Initially, the colleague said he didn’t know how the guy did it. But, in fact, with some prodding, he could tell us that the project leader motivated his team members by matching their roles to their interests, offering them opportunities to present to clients, and taking personal responsibility for shortfalls and mistakes, while giving others credit for progress. On the technical front, the project leader used certain identifiable diagnostic questions to understand complex issues.
The admiring colleague could have recorded and mimicked these behaviors—but he didn’t. One reason, of course, is that the expert himself had never articulated his approach to project leadership. He simply recognized patterns from experience and applied solutions that had worked well in the past. It was second nature to him, like managerial muscle memory. The second stumbling block was that the colleague was accustomed to having people “push” expertise to him. That’s how school and formal management-development programs work. But in today’s competitive work world, that model isn’t sufficient. You can’t count on companies or mentors to equip you with the skills and experience you need. You must learn how to “pull” deep smarts from others.
The Right System
Let’s look at a specific case, a composite drawn from the many executives we’ve helped to attain deep smarts:
Melissa has been with a large international beer company for more than eight years, having previously worked in a retail outlet that sold its products. She is currently a sales representative, but she has her eye on a regional VP position. In thinking about how to become more valuable to her organization (indeed, to any beverage company), she considers which in-house experts she would like to emulate. George, a general manager who has risen through the ranks from sales, is known as a smart decision maker, an outstanding negotiator, and an innovator. His colleagues say he has a remarkable ability to think both strategically and tactically about the entire business, from the brewery to the consumer, and that he balances a passion for data with in-depth talks with people in the field. In short, he would be an excellent role model.

Wednesday, April 17, 2013

The Science of What We Do (and Don't) Know About Data Visualization


Visualization is easy, right? After all, it's just some colorful shapes and a few text labels. But things are more complex than they seem, largely due to the ways we see and digest charts, graphs, and other data-driven images. While scientifically backed studies do exist, there are many things we still don't know about how and why visualization works. To help you make better decisions when visualizing your data, here's a brief tour of the research.
The Early Years of Understanding Data
While the early days of visualization go back over 200 years, actual research to understand how it works really only started in the 1960s. Jacques Bertin's Sémiologie Graphique (Semiology of Graphics), published in 1967, was the first systematic treatment of the different ways graphical representations encode data. Bertin coined many terms of the trade, such as the mark, which is the basic unit of every visualization, like a bar, line, or circle sector. He also defined a number of retinal variables, which are the visual properties we use to express the data; these include color, size, location, etc.
In the early 1980s, Bertin's work was picked up by researchers in statistical graphics and the nascent field of visualization (which didn't quite have its name yet). William Cleveland and Robert McGill performed experiments to find out which of Bertin's retinal variables were best suited for particular types of data, while Jock Mackinlay built a system that put Bertin's and their work to use to create visualizations from data.
Thanks to Cleveland and McGill, we know that our perception is most precise when it comes to judging the location of a mark, followed closely by our ability to perceive length. We are less adept at perceiving area and orientation, and our ability to distinguish colors is worse still. We can see tiny differences in direction between lines that are almost, but not exactly, parallel, yet we have a hard time quantifying an angle to say what percentage of a pie chart it represents. We can tell fewer than a dozen colors apart when their hues are very distinct, and we can precisely compare shades of colors next to each other; but move them apart and surround them with very different ones, and it all goes out the window.
This may all seem interesting, but its practical uses are not obvious. To turn the theory into practice, Mackinlay built a system that assigned data fields to visual variables automatically in a way that optimized readability. Most visualization tools today still don't offer that kind of intelligence, though Tableau's Show Me! feature is built on a very similar idea.
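To show what that kind of automation looks like in miniature, here is a toy sketch of the core idea (my own illustration, not Mackinlay's actual system, which also reasons about data types and expressiveness): pair the most important quantitative fields with the most perceptually effective channels from the Cleveland and McGill ranking.

# Toy sketch of automatic encoding assignment: rank visual channels by
# perceptual effectiveness for quantitative data (position best, color worst)
# and give the most important fields the most effective channels.

CHANNELS_BY_EFFECTIVENESS = ["position", "length", "angle", "area", "color"]

def assign_channels(fields_in_order_of_importance):
    """Pair each data field with the best still-unused visual channel."""
    return dict(zip(fields_in_order_of_importance, CHANNELS_BY_EFFECTIVENESS))

print(assign_channels(["revenue", "unit_cost", "discount"]))
# {'revenue': 'position', 'unit_cost': 'length', 'discount': 'angle'}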
More Knowledge, More Questions
A lot has happened since the 1980s, but there seems to be a bit of a standstill when it comes to understanding the basics. There are many open questions today, and we also realize the gaps and problems with some of the work performed.
As a case in point, Cleveland promoted an idea that he called banking to 45 degrees. The idea is simple: in a line chart, the average slope should be 45 degrees. That makes intuitive sense, since very steep charts tend to look overly dramatic and very flat ones make it hard to see any change in the data at all. Cleveland's recommendation was based on research on how well we are able to compare the slopes of lines. He found that the highest accuracy was achieved when the lines being compared had an average of 45 degrees inclination.
But it turns out that that is not the entire truth. There were some limitations in Cleveland's study that made 45 degrees look like the best option, but it seems that shallower angles are actually better. This was shown in a research paper that Justin Talbot, John Gerth, and Pat Hanrahan published in October 2012 at the annual VisWeek conference. The left line graph below is closer to 45 degrees on average, but the right one, while shallower, has fewer areas that produce large errors (which are indicated by the dark red color).
[Figure: the same data banked to roughly 45 degrees (left) and at a shallower aspect ratio (right), with areas producing large errors shaded dark red]
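For the curious, here is a minimal sketch of one simple banking heuristic, median absolute slope, in the spirit of Cleveland's idea; it is my own illustration under those assumptions, not code from the Talbot, Gerth, and Hanrahan paper.

import numpy as np

def banking_aspect_ratio(x, y):
    """Height/width ratio that banks the median segment slope to 45 degrees
    (a simple variant of Cleveland's banking heuristics)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # slopes of the line segments after normalizing the data to a unit square
    dx = np.diff(x) / (x.max() - x.min())
    dy = np.diff(y) / (y.max() - y.min())
    slopes = np.abs(dy / dx)
    return 1.0 / np.median(slopes)

# Example: one period of a sine wave ends up with a fairly flat aspect ratio.
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)
print(banking_aspect_ratio(x, y))   # roughly 0.45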
There is more. My former student Caroline Ziemkiewicz and I found that there is a potential interaction between the visual metaphor used to show data and the linguistic metaphor used to ask a question. We found this when looking at visualizations of trees, or hierarchies. The two most popular visualization techniques for this type of data, treemaps and node-link diagrams, differ in the way they show the hierarchy. Node-link diagrams use levels (or "above-ness"), while treemaps use nesting. A question asked using a levels metaphor ("Which of the nodes below node D ...") is easier to answer using the node-link diagram, which uses a compatible metaphor, than one asked using containment ("Which of the directories inside directory D..."), which works better with treemaps. The different metaphors are illustrated below, with treemaps on the left and node-link diagrams on the right.
[Figure: a treemap (left) and a node-link diagram (right) showing the same hierarchy]
We have only scratched the surface on this; there are many other metaphors used in visualization, whether obvious or not. Barbara Tversky and Jeff Zacks found in the early 2000s that lines imply transitions whereas bars imply individual values. The seemingly simple choice between a bar chart and a line chart thus has implications for how we perceive the data.
Bizarrely, so does gravity. In our work on metaphors, Ziemkiewicz and I found that people interpreted round shapes as unstable because, they said, they might roll away. But to roll, there must be a force that causes the movement. After studying this effect some more, we found that the points in a scatterplot attract each other, and that they are seemingly pulled down by gravity. We remember points not where they are in the plot, but shift them towards clusters in our memory, and let them drift slightly downwards.
Findings and distinctions in visualization can be subtle, but they can have a profound impact on how well we can read the information and how we interpret it. There is much more to be learned about how visualization works and how best we can represent, analyze, and communicate data.