Keynote Speakers

The following distinguished scientists are confirmed to present at the 2016 Conference in Vancouver:


Anna Brown
UK

RESPONSE DISTORTIONS IN SELF-REPORTED AND OTHER-REPORTED MEASURES: IS THERE LIGHT AT THE END OF THE TUNNEL?

Monday, July 4 | 8:30 – 9:30 am
Pinnacle Ballroom, 3F

Asking people to assess themselves or others on a set of psychological characteristics is by far the most popular method of gathering data in our field. We use this method either because it is the cheapest or because it is the best currently available for measuring the target characteristic. However, respondent-reported data are commonly affected by conscious and unconscious response distortions. Examples include individual styles in using rating options, inattention or cognitive difficulties in responding to reversed items, a tendency to present oneself in a positive light, halo effects, distortions driven by political pressures, and so on. The extent to which respondents engage in such behaviors varies, and if not controlled, these biases alter the true ordering of respondents on the traits of interest. Response distortions, therefore, should concern everyone who uses respondent-reported measures.

This talk provides an overview of research on biasing factors evoked by responding to questionnaire items with different features and in different contexts, discussing the evolution of views on the problem. I will discuss the emerging methods of statistical control, which explicitly incorporate biases in models of item-level response processes (e.g. Böckenholt, 2012; Johnson & Bolt, 2010). These methods offer great promise but also have natural limitations in their applicability and scope. Alternatives to statistical control include prevention, and there have been advances in this area too. Special response formats are one bias-prevention method, with the forced-choice format being particularly promising. During the past 10-15 years we have acquired methodology that enables the modelling of forced-choice data, making it possible to compare the effectiveness of the two approaches – statistical control versus prevention – in controlling bias. I will report the latest findings in this regard and share some of my own views and recommendations for the use of these methods depending on the context and stakes of assessments.
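
For readers unfamiliar with the modelling approach mentioned above, the core of the Thurstonian IRT model can be sketched in a single equation (a simplified rendering of the formulation published in Brown & Maydeu-Olivares, 2011; the notation below follows that literature rather than this abstract). When a respondent chooses between items $i$ and $k$ measuring traits $\eta_a$ and $\eta_b$, the probability of preferring item $i$ is

    P(y = 1 \mid \eta_a, \eta_b) = \Phi\left( \frac{-\gamma_{ik} + \lambda_i \eta_a - \lambda_k \eta_b}{\sqrt{\psi_i^2 + \psi_k^2}} \right)

where $\gamma_{ik}$ is a threshold for the item pair, $\lambda_i$ and $\lambda_k$ are the items' factor loadings, $\psi_i^2$ and $\psi_k^2$ are their unique variances, and $\Phi$ is the standard normal distribution function. Because the choice depends only on the difference between the two items' latent utilities, uniform biases such as acquiescence cancel out of the comparison, which is the sense in which the format prevents, rather than statistically corrects, distortion.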

I will argue that despite some significant progress, we are still far from bias-proof assessments. To create a breakthrough in this area, we must invest in research on test-taker cognitions, mixing qualitative and quantitative methods. The few available studies of test-taker behavior (e.g. Kuncel & Tellegen, 2009; Robie, Brown, & Beaty, 2007) show that test takers bring conflicting motives and complex cognitions to our assessments. Only when we understand these factors can we hope to create better assessments.

Anna Brown is a psychometrician with an established reputation and extensive industry experience. She was awarded a master's degree in mathematics from Moscow State University in 1992. In 1998, Anna joined SHL Group, where she carried out psychometric test development, test adaptation and research, eventually becoming the Principal Research Statistician of the Head Office Research division. Her PhD research, completed in 2010 at the University of Barcelona, led to the development of the Thurstonian IRT model, which has been described as a breakthrough in the scoring and design of forced-choice questionnaires and received the "Best Dissertation" award from the Psychometric Society. Applications of this new methodology include the development of a new IRT-scored version of the Occupational Personality Questionnaire (OPQ32r). Between 2010 and 2012, Anna worked at the University of Cambridge, designing and teaching short courses and summer schools in applied psychometrics under the Researcher Development Initiative of the Economic and Social Research Council (ESRC, UK).

In 2012, Anna Brown joined the School of Psychology at the University of Kent. Currently, she is Senior Lecturer in Psychological Methods and Statistics, teaching at undergraduate and postgraduate levels, and supervising research students. Anna’s research focuses on modelling response processes to non-cognitive assessments using Multidimensional Item Response Theory (MIRT). She is particularly interested in modelling preference decisions, modelling processes contributing to response distortions in self-report measures, and in feedback reports to individuals and organizations.

Anna has been an elected member of the ITC Council since 2012 and Chair of the Research and Guidelines Committee since 2014. She is also a member of the editorial board of the International Journal of Testing.


Fanny Cheung
Hong Kong

CULTURALLY-RELEVANT PERSONALITY TESTING IN VOCATIONAL SETTINGS

Monday, July 4 | 2:00 – 3:00 pm
Pinnacle Ballroom, 3F

Personality attributes are important considerations in vocational settings. Personality assessment is commonly used in personnel selection and talent development. Understanding one's personality also facilitates vocational decisions and supports career guidance. Cross-cultural research has recognized that personality measures used for these vocational purposes should be culturally sensitive to, and valid for, the local context. In this address, I illustrate the inadequacy and potential biases of imported measures when used without local validation. I then review the literature on the use of the Cross-cultural (Chinese) Personality Assessment Inventory (CPAI-2), an indigenously derived personality measure developed using the combined emic-etic approach, in predicting job performance and organizational behavior in Chinese business settings. Research using the adolescent version of the CPAI has also found useful personality correlates of vocational interests, career self-efficacy, and vocational identity among high school students. In addition to the etic personality dimensions, the emic traits included in the CPAI demonstrate incremental validity in predicting these vocational variables. Research on the CPAI highlights culturally-relevant personality testing as an important component in the policy and practice of vocational assessment.

Fanny Cheung received her BA from the University of California at Berkeley and her PhD from the University of Minnesota. She is currently Vice-President (Research) and Choh-Ming Li Professor of Psychology at The Chinese University of Hong Kong. She is a former President of the ITC (2012-14).

Fanny is the translator of the Chinese version of the MMPI, MMPI-2 and MMPI-A, and developed an indigenous personality measure for the Chinese cultural context, the Chinese Personality Assessment Inventory, which was later extended as the Cross-cultural Personality Assessment Inventory (CPAI-2).

Highly regarded internationally as an expert in cross-cultural personality assessment and gender research, with over 200 publications, Fanny's recent publications include the 2011 article "Toward a new approach to the study of personality in culture" with Van de Vijver and Leong in American Psychologist (Vol. 66, pp. 593-603). Her psychology awards include the 2014 International Association of Applied Psychology Award for Distinguished Scientific Contributions to the International Advancement of Applied Psychology, and the 2012 American Psychological Association Award for Distinguished Contributions to the International Advancement of Psychology. Her APA award article, "Mainstreaming culture in psychology", was published in American Psychologist (Vol. 67, pp. 721-730).


Kurt F. Geisinger
USA

THE CHANGING NATURE OF LICENSURE AND CERTIFICATION

Monday, July 4 | 9:30 – 10:30 am
Pinnacle Ballroom, 3F

Licensure and certification testing has traditionally been given great deference, with few major legal challenges. This condition, however, is changing rapidly. Once largely the domain of educational testing specialists, the work increasingly needs to become the focus of industrial/organizational psychologists, with legal challenges a major catalyst of this change. Among the changes being seen today are the movement from somewhat informal practice analyses to more full-blown job analyses; from a starting point of identifying needed knowledge, skills and abilities to a starting point of detecting critical job tasks; from discussions with knowledgeable experts in the field to acquiring more representative survey samples; the application of Title VII concerns in the United States; the need to identify what is required on the first day of work; the notion of competencies; and increasing test security concerns. In addition, there is increasing internationalization in licensure and certification testing.

Meierhenry Distinguished University Professor and Director,
Buros Center for Testing
21 Teachers College Hall
The University of Nebraska-Lincoln


Ronald Hambleton
USA

April Zenisky
USA

ADVANCES IN SCORE REPORTING: FOR THE SECOND TIME

Sunday, July 3 | 2:00 – 3:00 pm
Pinnacle Ballroom, 3F

Ronald K. Hambleton and April Zenisky

Eight years ago, the topic of score reporting was addressed by the two authors at the ITC Conference. Today it remains, we believe, the most important topic being studied by measurement specialists. If policy-makers, educators, psychologists and the public do not understand the test results they are given and cannot determine how to use the information, then all of the outstanding test theory and measurement advances, including item response theory, that have been made and applied to score reporting, and all of the resources, including time and money, that have been expended, are of very limited or no value at all. In this presentation, the many innovations in score reporting that have been developed over the past eight years will be described, and examples of both successful and unsuccessful displays of data will be shared. Necessary next steps for advancing the topic will also be presented.

Ronald K. Hambleton, Professor
Department of Educational Policy, Research & Administration
College of Education, University of Massachusetts Amherst, USA


Dirk Hastedt
IEA Executive Director

LEARNING COMPUTER SKILLS IN SCHOOL

Saturday, July 2 | 2:00 – 3:00 pm
Pinnacle Ballroom, 3F

Information and communications technology (ICT) literacy skills are an increasingly essential part of daily life, as worldwide use of modern technologies, such as email, smartphones or electronic banking, has become not only acceptable but customary. In schools too, a good grasp of modern technologies is increasingly expected of students; for example, teachers may ask students to research information on the internet, or to prepare a presentation using word processing or slide software. However, are students adequately prepared to use these technologies in school, as well as in later life?

In 2013, twenty-one countries participated in the International Computer and Information Literacy Study (ICILS) conducted by the International Association for the Evaluation of Educational Achievement (IEA). A computer-based survey assessed student ICT competencies, and interpretation was further supported by questionnaires gathering relevant background information from students, teachers, and schools.

The study revealed some interesting results, and this presentation will challenge some common preconceptions.

Dirk Hastedt is the Executive Director of the International Association for the Evaluation of Educational Achievement (IEA). From 2001 he was Co-Director of the IEA Data Processing and Research Center (IEA DPC) in Hamburg, Germany, where he was responsible for the center's international field of work.
Mr. Hastedt joined the IEA Germany foundation in 1989 to work on the IEA Reading Literacy Study and TIMSS 1995, and from 1997 held the position of Senior Researcher at the IEA DPC. From 2003 to 2005 he was project manager for TIMSS 1999. Since 2001, he has been responsible for the data processing for, among other studies, TIMSS 2003 and 2007, and PIRLS 2006. Mr. Hastedt is also co-editor-in-chief of the IEA-ETS Research Institute's journal 'Large-scale Assessments in Education'.


Andrew Dean Ho
USA

THE MISMEASUREMENT OF MOOCS: LESSONS FROM MASSIVE OPEN ONLINE COURSES ABOUT METRICS FOR ONLINE LEARNING

Saturday, July 2 | 9:30 – 10:30 am
Pinnacle Ballroom, 3F

Look past the initial hype of Massive Open Online Courses (MOOCs), as well as the frothing backlash, and you'll see dual failures of measurement. On the one hand, massive numbers of registrants were never "learners." On the other, low completion rates masked absolute numbers of completers that dwarfed those of conventional courses. I report findings from a review of hundreds of MOOCs administered over three years at Harvard University and the Massachusetts Institute of Technology (MIT). I emphasize guidelines for MOOC description and evaluation, as well as general guidelines for developing metrics for learning in online environments characterized by considerable heterogeneity and asynchronicity.
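
To make the two failures concrete, consider a toy calculation (the figures below are hypothetical, chosen only for illustration; they are not taken from the HarvardX/MITx data):

    # Hypothetical enrolment figures, for illustration only.
    mooc_registrants = 100_000    # everyone who ever clicked "register"
    mooc_completers = 5_000       # registrants who finished the course

    campus_enrolled = 100         # a large conventional lecture course
    campus_completers = 90

    mooc_rate = mooc_completers / mooc_registrants     # 0.05 -> "5% completion"
    campus_rate = campus_completers / campus_enrolled  # 0.90 -> "90% completion"

    # Judged by rates alone, the MOOC looks like a failure (5% vs. 90%),
    # yet it produced 5,000 completers against the campus course's 90 --
    # and many of the 95,000 "non-completers" never intended to learn at all.
    print(f"MOOC:   rate={mooc_rate:.0%}, completers={mooc_completers}")
    print(f"Campus: rate={campus_rate:.0%}, completers={campus_completers}")

Both the rate and the absolute count are honest numbers; the measurement failure lies in letting either one stand alone as "the" metric.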

Andrew Ho is Professor of Education at the Harvard Graduate School of Education. He is a psychometrician interested in the properties and consequences of test-based educational accountability metrics. His research has addressed measures of proficiency, growth, value added, achievement gains, achievement gap closure, college readiness, and course completion. He is a member of the National Assessment Governing Board and chair of the HarvardX Research Committee.

His current projects include developing robust achievement gap measures, improving standards for college readiness, and advancing research in Massive Open Online Courses. Dr. Ho has been a postdoctoral fellow at the National Academy of Education and Spencer Foundation and a recipient of the Jason Millman Promising Measurement Scholar Award from the National Council on Measurement in Education. He received his Ph.D. in Educational Psychology and his M.S. in Statistics from Stanford University.



Deniz S. Ones
USA

MEASUREMENT AND NOMOLOGICAL NETWORK OF ADAPTIVE AND MALADAPTIVE PERSONALITY IN EMPLOYMENT SETTINGS

Sunday, July 3 | 9:30 – 10:30 am
Pinnacle Ballroom, 3F

Over the past two decades, knowledge about the workplace consequences of personality traits, particularly the Big Five, has grown tremendously. Personality correlates of myriad workplace behaviors and attitudes have been examined by industrial-organizational (I-O) psychologists. My presentation will focus on how the structure and spectrum of personality measurements can be utilized for workplace applications such as employee selection and development. Personality constructs range between maladaptive positive and negative extremes, with the middle normal range representing typical (i.e., "normal" or adaptive) traits. Both the adaptive and the maladaptive personality construct space are characterized by hierarchy (including a general factor of personality, meta-traits, Big Five factors, Big Five aspects, and personality facets), a lack of simple structure (resulting in compound traits that indicate more than one personality domain), and bipolarity. Implications for work-oriented assessments and predictive validity will be discussed.

Deniz S. Ones is the Hellervik Professor of Industrial Psychology and a Distinguished McKnight Professor at the University of Minnesota, where she also directs the Industrial-Organizational Psychology program. She received her Ph.D. from the University of Iowa in 1993 under the mentorship of Frank Schmidt. Her research, published in more than 175 articles and book chapters, focuses on staffing, employee selection, and the measurement of personality, integrity, and cognitive ability, and has been cited over 12,500 times in the scientific literature (h-index = 53). She has received numerous awards for her work in these areas; among these are the 1994 Wallace best dissertation and the 1998 McCormick early career distinguished scientific contributions awards from the Society for Industrial and Organizational Psychology, the 2003 Cattell early career award from the Society for Multivariate Experimental Psychology, and the 2012 Lifetime Professional Career Contributions and Service to Testing Award from the Association for Test Publishers. She is a Fellow of Divisions 5 (Evaluation, Measurement, and Statistics) and 14 (SIOP) of the American Psychological Association, as well as a Fellow of the Association for Psychological Science. She served as co-editor-in-chief of the International Journal of Selection and Assessment (2001-2006) and has served on the editorial boards of multiple prominent scientific journals. She has edited several special issues of journals on cognitive ability tests, counterproductive work behaviors, and personality measurement for work applications. Dr. Ones has also served as Chair of APA's Committee on Psychological Tests and Assessments. In her applied work, she focuses on helping organizations design and implement valid and fair staffing and selection systems.

Interests: counterproductive work behaviours, personality measurement in I/O psychology, personnel selection and classification, personality assessment and research, meta-analysis, cross-cultural research


Maria Araceli Ruiz-Primo
USA

FORMATIVE ASSESSMENT: GOOD INTENTIONS, POOR IMPLEMENTATION 

Sunday, July 3 | 8:30 – 9:30 am
Pinnacle Ballroom, 3F

Assessments and data on student achievement have flooded the K-12 education landscape in recent years. A case in point is the tremendous increase in the use of interim assessments. Testing companies have devoted a great deal of their resources to developing interim assessments, and states and districts have spent millions of dollars buying these instruments under the premise that they can inform instructional practices and improve student learning. This trend in the use of interim assessments may reflect the fact that the nature and characteristics of formative assessment are yet to be properly understood. In addition, little is known about whether and how teachers, school administrators, or school districts actually use data from interim assessments. This presentation examines issues that are critical to ensuring that the information provided by formative assessment systems is translated into actionable knowledge that can be used to improve teaching and learning. The presentation discusses three issues: (1) the lack of understanding of what formative assessment is and how it works on an everyday basis; (2) the lack of understanding of the types of tasks that are most useful for formative assessment purposes; and (3) the importance of providing assessment data to teachers in ways they can interpret and use effectively.

Maria Araceli Ruiz-Primo
Director, Laboratory for Educational Assessment, Research, and Innovation (LEARN) Center
Associate Professor, Educational Psychology / Research Methods
School of Education and Human Development
University of Colorado Denver
Interests: development of technically sound assessments; development of a methodology for instructionally sensitive assessments; student-engaged formative assessment; inquiry-based science instruction