List of Articles: Pointwise Mutual Information

      • Open Access Article

        1 - Joint Learning Approach with Attention-based Model for Semantic Textual Similarity
        Ebrahim Ganjalipour, Amir Hossein Refahi Sheikhani, Sohrab Kordrostami, Ali Asghar Hosseinzadeh
        Introduction: Semantic Textual Similarity (STS) across languages is a pivotal challenge in natural language processing, with applications ranging from plagiarism detection to machine translation. Despite significant strides, STS remains a formidable task in languages with distinct syntactic structures and limited digital resources. Linguistic diversity, especially variation in word order, poses unique challenges, exemplified by languages that follow Subject-Object-Verb (SOV) or Subject-Verb-Object (SVO) patterns and compounded by complexities such as pronoun-dropping. This paper addresses the task of measuring STS in Persian, a language characterized by SOV word order and distinctive linguistic features.

        Method: We propose a novel joint learning approach, built on an enhanced self-attention model, to tackle the STS challenge in both SOV and SVO language structures. Our methodology involves establishing a comprehensive multilingual corpus with parallel data for SOV and SVO languages, ensuring a diverse representation of linguistic structures. We introduce an improved self-attention model featuring weighted relative positional encoding and context representations enriched with co-occurrence information through pointwise mutual information (PMI) factors. A joint learning framework leverages shared representations across languages, facilitating effective knowledge transfer and bridging the linguistic gap between SOV and SVO languages.

        Results: Our model, trained on Persian-English and Persian-Persian language pairs simultaneously, successfully extracts informative features, explicitly accounting for differences in word order and pronoun-dropping. During training, each batch is sampled from the STS benchmark, consisting of English and translated Persian text pairs, and fed into the customized encoder to obtain the attention matrix and output embeddings. The similarity module then predicts the STS score, which is used to compute the mean squared error (MSE) loss. Evaluation on the Persian-English and Persian-Persian STS benchmarks demonstrates strong performance, with Pearson correlation coefficients of 89.51% and 92.47%, respectively. Comparative experiments show superior performance against existing models, underscoring the effectiveness of the proposed approach.

        Discussion: An ablation study further substantiates the robustness of our system, showing faster convergence and reduced susceptibility to overfitting. The results underscore the significance of the enhanced model in addressing the complexities of measuring semantic similarity in languages with diverse linguistic structures and limited digital resources. The approach not only advances cross-lingual STS capabilities but also provides insights into handling syntactic variations, such as SOV and SVO word orders, and pronoun-dropping. This research opens avenues for future investigations into enhancing STS in languages with unique structural characteristics.
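        The two concrete mechanisms named in this abstract are (a) enriching context representations with pointwise mutual information (PMI) factors and (b) training against an MSE loss on predicted STS scores. The sketch below is a minimal, hypothetical illustration of (a): it estimates PMI(x, y) = log(p(x, y) / (p(x) p(y))) from sliding-window co-occurrence counts. The function name, window size, and counting scheme are assumptions made for illustration; the abstract does not specify the paper's exact estimation procedure.

        ```python
        import math
        from collections import Counter
        from itertools import combinations

        def pmi_factors(sentences, window=3):
            """Estimate PMI(x, y) = log(p(x, y) / (p(x) * p(y))) for word pairs
            that co-occur within a sliding window. Hypothetical sketch, not the
            paper's exact procedure."""
            word_counts = Counter()   # number of windows each word appears in
            pair_counts = Counter()   # number of windows each unordered pair appears in
            n_windows = 0
            for tokens in sentences:
                for i in range(max(1, len(tokens) - window + 1)):
                    ctx = set(tokens[i:i + window])
                    n_windows += 1
                    word_counts.update(ctx)
                    pair_counts.update(frozenset(p) for p in combinations(sorted(ctx), 2))
            pmi = {}
            for pair, c_xy in pair_counts.items():
                x, y = tuple(pair)
                p_xy = c_xy / n_windows
                p_x = word_counts[x] / n_windows
                p_y = word_counts[y] / n_windows
                pmi[(x, y)] = math.log(p_xy / (p_x * p_y))
            return pmi

        # Toy usage: pairs that co-occur more often than chance get positive PMI,
        # e.g. ("york", "city") here; independent pairs score near zero.
        corpus = [["new", "york", "city"], ["new", "york", "times"], ["new", "car"]]
        print(pmi_factors(corpus))
        ```

        For (b), the abstract describes a pipeline in which encoder embeddings feed a similarity module whose predicted score is compared with the gold label under MSE. A plausible minimal version, assuming cosine similarity rescaled to the benchmark's 0-5 range (the paper's similarity module may instead be a learned head), is:

        ```python
        import torch
        import torch.nn.functional as F

        def sts_mse_loss(emb_a, emb_b, gold_scores):
            """MSE between predicted and gold STS scores for a batch of
            sentence-embedding pairs of shape (batch, dim). The cosine
            rescaling to [0, 5] is an assumption for illustration."""
            pred = 2.5 * (F.cosine_similarity(emb_a, emb_b) + 1.0)
            return F.mse_loss(pred, gold_scores)
        ```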
      • Open Access Article

        2 - Enhanced Self-Attention Model for Cross-Lingual Semantic Textual Similarity in SOV and SVO Languages: Persian and English Case Study
        Ebrahim Ganjalipour, Amir Hossein Refahi Sheikhani, Sohrab Kordrostami, Ali Asghar Hosseinzadeh