Saifullah Razali, University of Hertfordshire, Singapore
Figurative language (metaphor, simile, idiom, hyperbole, sarcasm, and irony) encodes meaning that often departs from literal interpretation and is crucial across literature, social media, and educational content. Detecting such language remains challenging for NLP systems because it requires pragmatic, cultural, and world knowledge. This paper presents a thorough study of figurative language detection using pretrained language models (PLMs). We review linguistic foundations, describe architectures and training strategies built on PLMs (BERT, RoBERTa, and GPT-style models), present an experimental framework, and report results drawing on recent benchmark datasets and shared tasks.
Figurative Language, Metaphor Detection, Sarcasm Detection, Pretrained Language Models, BERT, RoBERTa, GPT
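Figurative-language detection is typically scored as binary classification over sentences or tokens (figurative vs. literal). As a minimal sketch of the standard evaluation step — with purely hypothetical gold labels and model outputs, not results from this paper — the positive-class precision/recall/F1 computation looks like:

```python
# Toy evaluation sketch for binary figurative-language detection.
# Labels: 1 = figurative (metaphor, sarcasm, ...), 0 = literal.
# Both label lists below are illustrative, not from any real model.

def precision_recall_f1(gold, pred):
    """Precision, recall, and F1 for the positive (figurative) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical sentence-level gold labels
pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical PLM classifier outputs

p, r, f = precision_recall_f1(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # → P=0.75 R=0.75 F1=0.75
```

In practice the predictions would come from a fine-tuned encoder (e.g. BERT or RoBERTa with a sequence-classification head), but the scoring step is the same.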
Jonathan Harrison, Raiff’s Bits LLC, Bridge City, Texas, USA
Modern AI systems achieve remarkable generative performance but lack stable ethical alignment, modular multi-perspective cognition, and explainable reasoning architectures. This paper presents Codette, a sovereign cognitive AI framework that addresses these challenges through three integrated contributions: (1) the RC+ξ (Recursive Convergence + Epistemic Tension) formalism, which models cognitive state evolution as a constrained dynamical system converging toward stable attractors; (2) a multi-agent Reasoning Forge that synchronizes heterogeneous cognitive agents through shared attractor dynamics—a form of consensus dynamics in distributed cognition; and (3) the AEGIS ethical governance system, which functions as a reinforcement-aligned ethical regulator with recursive anchor feedback. The framework is implemented as a six-layer modular architecture integrating eleven cognitive perspectives, a five-dimensional QuantumSpiderweb cognitive graph, persistent memory cocoons, and a parameter-efficient adapter training pipeline using LoRA/PEFT on consumer-grade hardware. Experimental benchmarks demonstrate 82.6% ethical alignment (AEGIS constraint satisfaction), multi-agent phase coherence Γ = 0.99 within 10 recursive iterations across 11 agents, 71.3% epistemic tension decay confirming attractor convergence, and robust cocoon stability (0.969 phase stability, 0.994 coherence across 20 cocoons). The system’s dynamical properties—oscillatory intent signals, monotonically decreasing epistemic tension, and bounded anomaly rejection—are validated through deep-simulation diagnostics, situating Codette within the intersection of dynamical systems theory, distributed cognition, and neuro-symbolic AI.
Cognitive Architecture, Multi-Agent Systems, Ethical AI, Dynamical Systems, Recursive Convergence, LoRA, Consensus Dynamics, Explainable AI, Quantum-Inspired Computing
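The two headline dynamical quantities — epistemic tension decaying under a contraction toward an attractor, and a Kuramoto-style phase-coherence order parameter Γ over the agents — can be illustrated with a toy simulation. This is a sketch of the general mathematical idea only, not the Codette implementation; all constants (11 agents, rate 0.5, scalar states) are illustrative:

```python
import cmath
import math
import random

# Toy sketch: each agent's scalar state contracts toward a shared attractor;
# epistemic tension is the largest step-to-step change, and Γ is the
# phase-coherence order parameter |mean(e^{iθ})| over the agents' phases.

random.seed(0)
N_AGENTS, ATTRACTOR, RATE = 11, 1.0, 0.5

states = [random.uniform(-1, 1) for _ in range(N_AGENTS)]
phases = [random.uniform(0, 2 * math.pi) for _ in range(N_AGENTS)]

for t in range(10):
    new_states = [s + RATE * (ATTRACTOR - s) for s in states]      # contraction map
    tension = max(abs(n - s) for n, s in zip(new_states, states))  # tension ξ_t
    mean_phase = cmath.phase(sum(cmath.exp(1j * p) for p in phases))
    phases = [p + RATE * (mean_phase - p) for p in phases]         # pull toward mean
    states = new_states

gamma = abs(sum(cmath.exp(1j * p) for p in phases)) / N_AGENTS     # Γ in [0, 1]
print(f"final tension={tension:.4f}, gamma={gamma:.3f}")
```

Because both updates halve pairwise differences each iteration, tension shrinks geometrically and Γ approaches 1 within ~10 steps, mirroring the qualitative behavior the abstract reports.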
Zinia Rahman1, Wang Zheng1, Refat Khan Pathan2; 1School of Automation, Department of Control Science and Engineering, Southeast University, Nanjing, China; 2School of Computing and Artificial Intelligence, Faculty of Engineering and Technology, Sunway University, Malaysia
Automatic interpretation of poetry presents significant challenges for natural language processing due to figurative language, cultural symbolism, and subtle emotional cues. This study proposes a comparative computational framework for extracting themes and emotions from English and Bangla poems using TF-IDF features and multiple supervised algorithms. Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Decision Trees (DT), Random Forests (RF), and a Convolutional Neural Network (CNN) were evaluated for both thematic and emotional categorization. For English poetry, ensemble and margin-based models achieved the highest performance, with SVM and Random Forest attaining up to 88.7% accuracy for emotion and 85.5% for theme classification. In Bangla poetry, emotion classification reached perfect accuracy across all models, while theme classification remained highly discriminative, with Random Forest achieving 94% accuracy. The study demonstrates the effectiveness of traditional machine learning approaches for bilingual poetic analysis in low-resource literary domains.
Poetry Analysis, Emotion and Theme Classification, Deep Learning, CNN, ML
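The core of the pipeline — TF-IDF vectorization followed by a supervised classifier — can be sketched compactly. The snippet below uses a hand-rolled TF-IDF and a 1-nearest-neighbour cosine classifier on tiny made-up "poems" purely for illustration; the study itself uses real English/Bangla corpora and SVM/KNN/DT/RF/CNN models (e.g. via scikit-learn):

```python
import math
from collections import Counter

# Minimal TF-IDF + nearest-neighbour sketch with illustrative mini-"poems".

def tfidf_vectors(docs):
    """Return one sparse term->weight dict per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

train = [("the river sings of sorrow and rain".split(), "sadness"),
         ("bright morning larks rejoice in sunlit fields".split(), "joy"),
         ("tears fall like rain upon the grave".split(), "sadness")]
test_doc = "rain and sorrow upon the river".split()

vecs = tfidf_vectors([d for d, _ in train] + [test_doc])
scores = [(cosine(vecs[-1], vecs[i]), lbl) for i, (_, lbl) in enumerate(train)]
print(max(scores)[1])  # label of the most similar training poem
```

The real framework swaps the toy classifier for stronger learners (SVM, Random Forest, CNN), but the feature representation feeding them is the same TF-IDF scheme.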
Alexander Chang1, David T. Garcia2; 1Troy High School, 2200 Dorothy Ln, Fullerton, CA 92831; 2University of California, Los Angeles, CA 90095
Large language models are increasingly integrated into educational and professional environments; however, effective interaction with these systems requires prompt engineering skills that most users have not formally developed. This paper presents an intelligent mobile application designed to teach prompt engineering through structured instruction, iterative practice, and AI-powered real-time feedback. The system integrates user authentication, a scaffolded educational framework, and live interaction with large language models through OpenAI. Key challenges addressed include feedback consistency, learner engagement, and operational cost management. Experimental evaluations examined the reliability of AI-generated feedback and the sustainability of API usage under simulated user loads. Results demonstrated strong alignment and correlation between AI evaluations and expert assessments, as well as significant efficiency gains through response caching. By combining meta-prompting, adaptive learning design, and mobile accessibility, the proposed application enables continuous skill development and offers a scalable solution for improving effective communication with large language models.
Prompt Engineering, Mobile Application, AI, Real-time Feedback
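One of the efficiency mechanisms the abstract highlights is response caching: identical prompts are served from a local store instead of triggering repeated paid API calls. A minimal sketch of the idea — with a placeholder `fake_llm` standing in for a real OpenAI call, and all names illustrative rather than taken from the application — is:

```python
import hashlib

# Sketch of prompt-keyed response caching to cut API cost.

class CachedLLM:
    def __init__(self, backend):
        self.backend = backend    # callable that actually queries the model
        self.cache = {}           # prompt-hash -> cached completion
        self.api_calls = 0

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in self.cache:             # cache miss: one paid API call
            self.api_calls += 1
            self.cache[key] = self.backend(prompt)
        return self.cache[key]                # cache hit: free

def fake_llm(prompt):                         # placeholder for a real API call
    return f"feedback for: {prompt}"

llm = CachedLLM(fake_llm)
for _ in range(5):
    llm.complete("Rate this prompt: 'Summarise the article.'")
llm.complete("Rate this prompt: 'Explain recursion.'")
print(llm.api_calls)  # → 2 (two distinct prompts, six requests)
```

A production version would add cache expiry and near-duplicate matching, but even exact-match caching yields the kind of efficiency gain under repeated learner exercises that the paper reports.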
Graeme Heald, Australia
The paper addresses the issue of hallucinations in Large Language Models (LLMs), which arise from the limitations of classical binary logic that forces a True/False output, leading to stochastic guessing when data is missing or contradictory. It introduces U4 Logic, a four-valued non-classical framework with True (T), False (F), Uncertain (U), and Null states. U4 Logic incorporates Uncertainty (U) as a valid truth value and Null as a non-designated state that absorbs contradictions and prevents their propagation. By dismantling the Principle of Explosion, U4 ensures contradictions collapse into a non-actionable Null state, blocking hallucinatory cascades. Strict-vacuous mapping rules, such as U → T = F and F → T = F, prevent the AI from deriving confident conclusions from uncertainty or falsehoods. U4 transforms LLMs into Logic-Gated Reasoners, offering a robust framework for trustworthy AI in high-stakes environments by prioritizing inference integrity over probabilistic guessing.
Hallucinations, LLM, Softmax, U4 logic
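The two explicitly quoted rules (U → T = F and F → T = F) and the contradiction-absorbing Null state can be encoded as truth tables. The sketch below is my reading of the abstract — in particular, treating Null as absorbing in all connectives and collapsing a direct T/F clash to Null is an assumption, not the full U4 specification:

```python
# Toy encoding of U4 truth values. The strict-vacuous implication rules
# (U -> T = F, F -> T = F) are quoted from the abstract; the Null behavior
# is an illustrative reading, not the paper's full definition.

T, F, U, NULL = "T", "F", "U", "Null"

def implies(a, b):
    if NULL in (a, b):    # Null absorbs: nothing actionable follows
        return NULL
    if a in (U, F):       # strict-vacuous: no confident conclusion
        return F          # may be drawn from uncertainty or falsehood
    return b              # a == T: implication reduces to its consequent

def conjoin(a, b):
    if {a, b} == {T, F}:  # direct contradiction collapses to Null,
        return NULL       # blocking the Principle of Explosion
    if NULL in (a, b):
        return NULL
    if F in (a, b):
        return F
    if U in (a, b):
        return U
    return T

assert implies(U, T) == F and implies(F, T) == F  # rules from the abstract
assert conjoin(T, F) == NULL                      # contradiction -> Null
```

Because Null is non-designated, any inference chain touching a contradiction yields Null and is never asserted — the "logic-gated" behavior the abstract describes.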
Felipe Montoya Rodriguez, Founder, MONTIO INC, Bridgeport, CT, USA
Coffee supply chains remain vulnerable to unverifiable origin claims, fragmented operational records, and opaque payment flows. This manuscript proposes a deployable traceability architecture that separates verifiability from disclosure. Supply-chain events (origin, custody transfer, inspection, and condition monitoring) are issued as signed W3C verifiable credentials (VCs), while only cryptographic commitments and event linkages are anchored on-chain for immutable ordering and timestamps. Sensitive business payloads remain encrypted off-chain under policy control; auditors, lenders, and insurers verify via selective disclosure and evidence packs. We extend the design to hybrid deployments spanning permissioned ledgers and public anchoring chains using periodic Merkle-root checkpoints.
blockchain; supply-chain traceability; IoT attestations; verifiable credentials; privacy; auditability; cross-chain
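The periodic Merkle-root checkpoint works by hashing each off-chain event commitment, folding the hashes pairwise into a single root, and anchoring only that root on the public chain. A minimal sketch — with illustrative placeholder event payloads, and an odd-node-promotion rule that is one of several common tree conventions — is:

```python
import hashlib

# Sketch of a Merkle-root checkpoint over off-chain event commitments.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Binary Merkle tree; an unpaired node is promoted to the next level."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(h(level[i] + level[i + 1]))  # hash each pair together
        if len(level) % 2:
            nxt.append(level[-1])                   # promote the odd node
        level = nxt
    return level[0]

# Illustrative supply-chain event commitments (real ones would be VC digests).
events = [b"origin:lot42:farm-a", b"custody:lot42:exporter-b",
          b"inspection:lot42:grade-aa", b"condition:lot42:temp-ok"]

checkpoint = merkle_root(events).hex()  # only this 32-byte digest goes on-chain
print(checkpoint)
```

Any later tampering with an off-chain event changes the recomputed root, so it no longer matches the anchored checkpoint; a verifier holding one event plus its sibling hashes can check inclusion without seeing the other payloads.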