Authors: Bayram, Hatice Merve; Ozturkcan, Arda
Date accessioned/available: 2024-09-11
Date of issue: 2024
ISSN: 0007-070X (print); 1758-4108 (online)
DOI: https://doi.org/10.1108/BFJ-02-2024-0158
URI: https://hdl.handle.net/11363/7844

Abstract:
Purpose - This study aims to assess the effectiveness of four artificial intelligence (AI) models - ChatGPT 3.5, ChatGPT 4, Bard AI and Bing Chat - in accurately providing information about the protein quality (PQ) content of food items.
Design/methodology/approach - A total of 22 food items, curated from the Food and Agriculture Organisation (FAO) of the United Nations (UN) report, were input into each model. These items were characterised by their PQ content according to the Digestible Indispensable Amino Acid Score (DIAAS).
Findings - Bing Chat was the most accurate AI assistant, with a mean accuracy rate of 63.6% across all analyses, followed by ChatGPT 4 with 60.6%. ChatGPT 4 (Cohen's kappa = 0.718, p < 0.001) and ChatGPT 3.5 (Cohen's kappa = 0.636, p = 0.002) showed substantial agreement between the baseline and 2nd analysis, whereas they showed moderate agreement between the baseline and 3rd analysis (Cohen's kappa = 0.538, p = 0.011 for ChatGPT 4; Cohen's kappa = 0.455, p = 0.030 for ChatGPT 3.5).
Originality/value - This study provides an initial insight into how emerging AI models assess and classify nutrient content pertinent to nutritional knowledge. Further research into the real-world implementation of AI for nutritional advice is essential as the technology develops.

Language: English
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Sustainable diet; Sustainability; Artificial intelligence; AI models; Food assessment; ChatGPT; Bard AI; Bing Chat
Title: AI showdown: info accuracy on protein quality content in foods from ChatGPT 3.5, ChatGPT 4, Bard AI and Bing Chat
Type: Article
Journal details: Volume 126, Issue 9, pp. 3335-3346
DOI: 10.1108/BFJ-02-2024-0158
Scopus ID: 2-s2.0-85197474303
Web of Science ID: WOS:001263254600001
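Note: the Findings report a mean accuracy rate against the FAO/DIAAS reference and Cohen's kappa agreement between repeated analyses of the same model. The following is a minimal Python sketch of how such figures could be computed; the PQ class labels ("low"/"medium"/"high") and the item ratings are hypothetical placeholders, not the study's data or its exact scoring procedure.

# Minimal sketch (hypothetical data): accuracy vs. a DIAAS-based reference and
# Cohen's kappa between two analysis runs of the same model.
from collections import Counter

def accuracy(reference, predicted):
    # Share of items where the model's PQ class matches the reference class.
    return sum(r == p for r, p in zip(reference, predicted)) / len(reference)

def cohens_kappa(ratings_a, ratings_b):
    # Chance-corrected agreement between two runs (baseline vs. repeat analysis).
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed agreement
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # expected agreement by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example with a handful of items (placeholder values).
reference = ["high", "high", "medium", "low", "high", "medium"]
baseline  = ["high", "medium", "medium", "low", "high", "medium"]
repeat    = ["high", "medium", "medium", "low", "high", "low"]

print(f"accuracy vs. reference: {accuracy(reference, baseline):.1%}")
print(f"Cohen's kappa (baseline vs. repeat): {cohens_kappa(baseline, repeat):.3f}")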