Langenbucher, J., Labouvie, E., & Morgenstern, J. (1996). Methodological developments: Measurement of diagnostic agreement. Journal of Consulting and Clinical Psychology, 64, 1285-1289. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.

Behavior analysts have developed a sophisticated methodology for assessing behavior change that depends on precise measurement of behavior. Direct observation of behavior has traditionally been one of the mainstays of behavioral measurement. Researchers must therefore attend to the psychometric properties of observational measures, e.g. interobserver agreement (IOA), to ensure reliable and valid measurement. Of the many indices of interobserver agreement, percentage agreement is the most popular. Its use persists despite repeated reminders, and empirical evidence, that it is not the most psychometrically sound statistic for determining interobserver agreement, because of its inability to account for chance. Cohen's (1960) kappa has long been proposed as a psychometrically superior statistic for evaluating interobserver agreement. Kappa is described and methods for its calculation are presented.

IOA improves the credibility of data by comparing independent observations of the same events by two or more people. IOA is calculated by dividing the number of agreements between independent observers by the total number of agreements plus disagreements; the quotient is then multiplied by 100 to yield percent agreement.

Mean count-per-interval IOA: 1) divide the observation period into intervals, 2) have each observer record the frequency of the behavior in each interval, 3) calculate agreement for each interval (as for a total count), 4) sum the per-interval IOAs, 5) divide by the number of intervals to obtain the mean.

Exact count-per-interval IOA: the percentage of intervals in which both observers record exactly the same count of the behavior.

Mean duration-per-occurrence IOA: 1) calculate the duration of each occurrence of the response for each observer, 2) compute an IOA percentage for each occurrence, 3) divide the sum of these IOAs by the number of timed occurrences, 4) multiply by 100 (and round to a whole number).
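The calculations described above can be sketched in Python. This is a minimal sketch, not a standard library: the function names, and the convention that two identical values (including two zero counts) count as perfect agreement, are my own assumptions.

```python
from collections import Counter

def _ratio(x, y):
    """Smaller value over larger value; identical values (incl. 0 and 0) agree perfectly."""
    return 1.0 if x == y else min(x, y) / max(x, y)

def total_count_ioa(count_a, count_b):
    """Percent agreement on the session totals reported by two observers."""
    return 100.0 * _ratio(count_a, count_b)

def mean_count_per_interval_ioa(counts_a, counts_b):
    """Mean of the per-interval agreement ratios, expressed as a percentage."""
    ratios = [_ratio(a, b) for a, b in zip(counts_a, counts_b)]
    return 100.0 * sum(ratios) / len(ratios)

def exact_count_per_interval_ioa(counts_a, counts_b):
    """Percentage of intervals in which both observers recorded the same count."""
    exact = sum(a == b for a, b in zip(counts_a, counts_b))
    return 100.0 * exact / len(counts_a)

def mean_duration_per_occurrence_ioa(durations_a, durations_b):
    """Shorter over longer duration per occurrence, averaged and scaled to percent."""
    ratios = [_ratio(a, b) for a, b in zip(durations_a, durations_b)]
    return 100.0 * sum(ratios) / len(ratios)

def cohens_kappa(records_a, records_b):
    """Chance-corrected agreement between two observers' trial-by-trial records."""
    n = len(records_a)
    observed = sum(a == b for a, b in zip(records_a, records_b)) / n
    # Chance agreement is estimated from each observer's marginal category frequencies.
    marginals_a, marginals_b = Counter(records_a), Counter(records_b)
    expected = sum(marginals_a[c] * marginals_b[c] for c in marginals_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For example, observers whose session totals are 9 and 10 yield a total count IOA of 90%, whereas kappa applied to their per-trial records discounts the share of agreement expected by chance alone.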
Trial-by-trial IOA: compares agreement on each individual trial, rather than on totals across the whole session.

Berk, R. A. (1979). Generalizability of behavioral observations: A clarification of interobserver agreement and interobserver reliability. American Journal of Mental Deficiency, 83, 460-472. Hartmann, D. P. (1977). Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 10, 103-116. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.

Shrout, P. E., Spitzer, R. L., & Fleiss, J. L. (1987). Comment: Quantification of agreement in psychiatric diagnosis revisited. Archives of General Psychiatry, 44, 172-178. Landis, J. R., & Koch, G. G.

(1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174. Suen, H. K., & Lee, P. S. (1985). Effects of the use of percentage agreement on behavioral observation reliabilities: A reassessment. Journal of Psychopathology and Behavioral Assessment, 7, 221-234.

Reliable data are data that yield the same results on each measurement.