1. Introduction
The Open Access movement began at the start of the 21st century with the Budapest Open Access Initiative (BOAI) in February 2002. This initiative framed Open Access as the unrestricted availability of scientific knowledge, ushering in a new era in the dissemination of academic output (BOAI, 2002; BOAI, 2012; Suber, 2012). Subsequently, the Bethesda Statement on Open Access Publishing, released in June 2003, and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, presented on October 22, 2003, broadened the foundations established by the BOAI. These declarations positioned Open Access as a strategic goal for global academic and scientific institutions, consolidating the pillars of the BBB movement (Budapest, Bethesda, and Berlin) (Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, 2003; Max-Planck-Gesellschaft, 2003; Willinsky, 2006). With a focus on practical strategies and commitments, the BBB movement provided a structural basis for the contemporary Open Access model, highlighting its potential to transform the production and accessibility of scientific knowledge, thereby promoting significant advances in global academic sharing (Laakso & Björk, 2012; Suber, 2012).
In the scientific communication system, scholarly journals are the principal formal means of disseminating knowledge and developments in highly complex fields, such as Engineering. Among these journals, Open Access titles stand out for broadening research reach, promoting greater visibility and universal access (Harnad, 2001; Suber, 2012). However, the publishing models under this format vary significantly, notably between journals that charge Article Processing Charges (APC), as in the Gold Open Access model, and Diamond Open Access, which removes financial barriers for both authors and readers but faces sustainability challenges (Druelinger & Ma, 2023; Swan, 2015; Yoon et al., 2024).
Open access has also been shown to have a significant impact on the visibility and accessibility of scientific outputs, as evidenced by its transformative role in Chile, where it has enhanced doctoral education and improved the international visibility of local research (Segovia et al., 2024). This impact is also observed in European universities belonging to the YERUN network, where institutional open access initiatives have been associated with higher citation rates, especially for publications under the Green Open Access model (De Filippo & Mañana-Rodríguez, 2020).
The citation advantage often attributed to Open Access journals is a widely discussed but still controversial topic. Systematic studies indicate that this advantage may vary depending on disciplinary and methodological factors but is generally observed across multiple scientific areas (Langham-Putrow et al., 2021; Huang et al., 2024). Nevertheless, this advantage must be analyzed considering the associated costs. The widely adopted APC model has been criticized for exacerbating structural inequalities, as many researchers—particularly those from less advantaged institutions—cannot afford the publication fees (May, 2020; Borrego, 2023; Haug, 2019). On the other hand, the Diamond Open Access model faces financial challenges related to its reliance on public and institutional subsidies, which may compromise its long-term sustainability (Borrego, 2023; Frank et al., 2023).
Although both APC and Diamond models promote open access, their approaches reflect fundamental differences. The APC model is widely adopted by commercial publishers, who shift costs directly to authors, allowing for greater financial independence but often resulting in inequalities for researchers with limited resources. Conversely, the Diamond model removes economic barriers for authors and readers, fostering equity, but depends on institutional and public subsidies that vary significantly across regions and fields. This disparity reveals a delicate balance between financial sustainability and the democratization of scientific knowledge, especially in highly productive areas like Engineering (Andringa et al., 2024; Borrego, 2023).
To address these limitations and strengthen the relevance of the Diamond Open Access model, the Diamond OA Standard (DOAS) sets forth clear quality guidelines, covering everything from funding and governance to visibility and impact. Developed based on the Action Plan for Diamond Open Access and revised by the DIAMAS project, this standard prioritizes technical efficiency, equity, and inclusion as central elements for the sustainability of these journals (Ancion et al., 2022; Rooryck et al., 2024). Its seven fundamental components include: (1) funding; (2) legal ownership, mission, and governance; (3) open science; (4) editorial management, quality, and research integrity; (5) technical service efficiency; (6) visibility, communication, marketing, and impact; and (7) equity, diversity, inclusion, and belonging (EDIB), multilingualism, and gender equity. These steps provide a comprehensive framework for strengthening the relevance and competitiveness of the Diamond Open Access model on the global stage.
The present study comparatively analyzes Open Access journals that charge APCs and Diamond Open Access journals in Engineering, using consolidated metrics from the Scopus database. These metrics include CiteScore, the number of accumulated citations, the volume of published articles, and the percentage of cited articles, organized by quartiles (Q1–Q4) and the top 10% group. This analysis offers a detailed view of the differences in impact and performance between these models, contributing to the debate on editorial policies and knowledge dissemination strategies. Although the focus is on Engineering, the methods and findings presented here can be extended and contextualized to other fields, providing a starting point for studies investigating similar patterns across various disciplines.
The findings reveal that journals using the APC model generally exhibit higher CiteScore metrics and citation averages, particularly in the top quartiles. In contrast, Diamond Open Access journals perform consistently and excel in the proportion of cited articles within the top 10% group. These results underline the complementary roles of both models in promoting access and impact in engineering research.
2. Method
This quantitative, exploratory study can be classified as documentary in nature (Neuendorf, 2017), and it analyzed data extracted from the Scopus database during the last week of November 2024. The objective was to compare the performance of Open Access journals that charge APCs with Diamond Open Access (fee-free) journals in Engineering, categorized by quartiles (Q1–Q4) and by the top 10% group, based on 2023 metrics. Four main variables were evaluated: CiteScore, number of citations (2020–2023), articles published (2020–2023), and the percentage of cited articles (% Cited).
The 2020–2023 time frame was chosen to capture recent trends in journal performance, considering that shorter periods might not reflect structural variations, while more extended periods could include outdated metrics relative to current editorial practices. This interval also allows for identifying the potential impacts of the COVID-19 pandemic on publishing and citation behavior, especially in fields such as Engineering.
The Engineering area in Scopus encompasses 18 sub-areas, including Aerospace Engineering, Architecture, Automotive Engineering, Biomedical Engineering, Civil and Structural Engineering, Electrical and Electronic Engineering, and Mechanical Engineering. Of the 5,162 indexed journals in the area, 803 (15.56%) were classified as Open Access. After applying quartile filters, 757 titles out of the 3,012 active journals were selected. By using the “top 10%” filter, 126 journals were identified.
The journals were divided into two groups for comparative analysis: G1, consisting of 504 (66.58%) Open Access journals that charge APCs, and G2, consisting of 253 (33.42%) Diamond Open Access journals. Journals classified as “N/A” by Scopus (n=46), which did not have a quartile assigned, were excluded from the analysis.
Statistical analyses were conducted using Statdisk software (version 13.0), which provides specialized tools for non-parametric statistical tests essential for this study. The choice of Statdisk was based on its simplicity, accessibility, and efficiency in performing robust exploratory analyses and its compatibility with extensive and heterogeneous datasets. These features ensured greater agility and accuracy in obtaining results.
Exploratory analyses offered a detailed view of the distributions and variability of the variables, aiding in understanding the performance differences between groups G1 and G2. Descriptive tables presented confidence intervals and dispersion measures, and the analyses were organized by sub-area to ensure representativeness and data consistency. This procedure ensured more robust comparisons, allowing an evaluation of how each access model influences performance in specific metrics.
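For transparency, the exploratory step can be reproduced with open tools. The sketch below (illustrative Python; the study itself used Statdisk, and the sample values are simulated, not the study's data) computes dispersion measures and a bootstrap 95% confidence interval for the median of a skewed, citation-like variable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical citation counts for one sub-area (illustrative, not the study's data)
citations = rng.lognormal(mean=6.0, sigma=1.2, size=100).round()

# Dispersion measures of the kind reported in the descriptive tables
median = np.median(citations)
iqr = np.percentile(citations, 75) - np.percentile(citations, 25)
print(f"median = {median:.0f}, IQR = {iqr:.0f}")

# Bootstrap 95% confidence interval for the median (robust to skewness)
res = stats.bootstrap((citations,), np.median, confidence_level=0.95,
                      random_state=rng)
print("95% CI for the median:", res.confidence_interval)
```

A bootstrap interval is used here because, for heavily skewed count data, the usual normal-theory interval for the mean can be misleading.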
2.1. Study Design
The research adopted a quantitative, descriptive, and comparative design. The statistical analyses were conducted with the following objectives:
Assess the normality of the data for each variable and quartile.
Compare G1 and G2 concerning the selected metrics.
Explore the relationships between CiteScore and accumulated citations in both groups.
2.2. Statistical Procedures
2.2.1. Normality Test
The data were subjected to the Ryan-Joiner test to verify adherence to the normal distribution. The results indicated that most variables were non-normally distributed, except for a few specific subgroups. Outliers were identified in several variables, particularly in Citations and Articles Published.
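The Ryan-Joiner test is not available in common open-source libraries, but its core idea, the correlation between the ordered sample and theoretical normal quantiles, can be sketched as follows (illustrative Python with simulated data, not the study's; the plotting-position formula is one common choice, and a formal decision would require the tabulated critical values):

```python
import numpy as np
from scipy import stats

def ryan_joiner_r(x):
    """Correlation between the ordered sample and theoretical normal
    quantiles; values near 1 are consistent with normality. (Tabulated
    critical values, not reproduced here, are needed for a formal test.)"""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # Blom plotting positions
    q = stats.norm.ppf(p)                           # theoretical normal quantiles
    r, _ = stats.pearsonr(x, q)
    return r

rng = np.random.default_rng(42)
normal_like = rng.normal(3.0, 1.0, size=200)   # e.g., CiteScore-like values
skewed = rng.lognormal(6.0, 1.5, size=200)     # e.g., citation-like values

print(round(ryan_joiner_r(normal_like), 3))    # near 1 for normal data
print(round(ryan_joiner_r(skewed), 3))         # visibly lower for skewed data
```

In practice, the widely available Shapiro-Wilk test (`scipy.stats.shapiro`) behaves similarly and could substitute for the Ryan-Joiner procedure.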
2.2.2. Tests for Comparing G1 and G2
Given the prevalence of non-normality, non-parametric tests were performed to compare the two groups:
- Wilcoxon Rank-Sum Test for Independent Samples: Used to assess differences in the distributions of variables between G1 and G2 within each quartile. Comparisons were conducted separately for CiteScore, Citations, Articles Published, and % Cited.
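The rank-sum comparison described above can be illustrated with scipy's implementation (the study itself used Statdisk; the two samples below are simulated, hypothetical CiteScore-like values, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated CiteScore-like values for one quartile (hypothetical data)
g1 = rng.lognormal(mean=1.3, sigma=0.4, size=120)  # APC-charging journals
g2 = rng.lognormal(mean=1.0, sigma=0.4, size=80)   # Diamond journals

# Wilcoxon rank-sum test (equivalent to Mann-Whitney U) for independent samples
z, p = stats.ranksums(g1, g2)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Because the test compares ranks rather than raw values, it remains valid under the skewness and outliers reported for the citation variables.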
2.2.3. Correlation Tests
Correlation analyses were carried out to investigate the relationship between CiteScore and accumulated citations in each group and quartile:
- Pearson or Spearman Correlation Coefficient: Selected according to data normality. The analyses were accompanied by significance tests (p-value) to identify statistically significant correlations.
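The choice between the two coefficients can be sketched as follows (illustrative Python with simulated, hypothetical journal metrics, not the study's data): when the relationship is monotone but the variables are skewed, Spearman's rank correlation is the safer choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical journal metrics (illustrative, not the study's data)
citescore = rng.lognormal(mean=1.0, sigma=0.5, size=150)
citations = np.exp(2.0 + 1.5 * np.log(citescore)
                   + rng.normal(scale=0.3, size=150))  # monotone, skewed link

r_p, p_p = stats.pearsonr(citescore, citations)   # linear association
r_s, p_s = stats.spearmanr(citescore, citations)  # monotone (rank) association
print(f"Pearson  r   = {r_p:.2f} (p = {p_p:.1e})")
print(f"Spearman rho = {r_s:.2f} (p = {p_s:.1e})")
```

On data like these, Pearson's r is pulled around by the heavy right tail, while Spearman's rho captures the underlying monotone relationship more faithfully.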
2.2.4. Homogeneity Analysis
Homogeneity tests were applied before the comparisons to evaluate the equality of variances between groups. This ensured the appropriate choice of statistical methods used.
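The text does not name the specific homogeneity test used; a common robust choice for skewed data is Levene's median-centered variant (Brown-Forsythe), sketched below on simulated, hypothetical samples (not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical samples with clearly different spreads (not the study's data)
g1 = rng.lognormal(mean=1.3, sigma=0.6, size=120)
g2 = rng.lognormal(mean=1.0, sigma=0.3, size=80)

# Median-centered Levene test (Brown-Forsythe), robust to non-normality
w, p = stats.levene(g1, g2, center="median")
print(f"W = {w:.2f}, p = {p:.4f}")
```

A small p-value indicates unequal variances, which would argue against variance-sensitive parametric comparisons and for the non-parametric tests adopted in this study.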
2.3. Tools and Visualization
Analyses were conducted using Statdisk software (version 13.0) and Excel spreadsheets for initial data organization and visualization. Scatter plots and box plots were generated to facilitate the visual interpretation of results and to identify patterns or anomalies.
2.4. Rationale for Non-Parametric Tests
Non-parametric tests were selected because the distributions exhibited non-normality, skewness, and outliers in several variables. This approach provided greater robustness for group comparisons.
4. Discussion
This study compared the performance of Open Access journals that charge APCs (G1) with Diamond Open Access journals (G2) in Engineering, using four main variables: CiteScore, number of accumulated citations, published articles, and percentage of cited articles. The results were organized by quartiles (Q1–Q4) and the top 10% group, based on data from the Scopus database (Table 1). This is the first study to comprehensively analyze the differences in performance between these two publishing models in the context of Engineering.
Before examining the selected variables, the APC fees charged by Open Access journals were analyzed, since these fees reveal significant differences in average charges across quartiles, highlighting disparities in pricing practices.
The APC fees charged by Open Access journals in Engineering, categorized by quartiles (Q1–Q4) and the top 10% group, revealed significant disparities. Among the 504 journals that charge APCs (66.58%), the average values were higher in the upper quartiles, such as in the top 10% (USD 2,399.79), with a median of USD 2,200.00 and a range of USD 615.00 to USD 6,730.00. In Q1, the average was slightly lower (USD 2,151.99); in Q2, the amounts decreased significantly, reaching an average of USD 1,665.16 (Tables 1; A4). In the lower quartiles, Q3 and Q4, fees sharply declined, with averages of USD 848.35 and USD 603.04, respectively. The median in Q4 was only USD 300.00, reflecting the greater affordability of journals in this quartile. The wide variability in APC fees was demonstrated by high coefficients of variation, especially in Q4 (134.40%), indicating significant heterogeneity in charging practices. These disparities in pricing practices may directly influence journal performance on metrics such as number of citations and percentage of cited articles, reflecting structural differences between publishing models (Tables 1; A4).
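The coefficient of variation used throughout this analysis is simply the standard deviation expressed as a percentage of the mean; a CV above 100%, as found in Q4, means the dispersion exceeds the average fee. A toy computation on hypothetical fee values (not the study's data) makes this concrete:

```python
import numpy as np

# Hypothetical APC fees in USD for a lower-quartile subgroup (not the study's data)
apc = np.array([0, 150, 300, 300, 450, 600, 980, 1200, 2500])

mean = apc.mean()
sd = apc.std(ddof=1)       # sample standard deviation
cv = 100 * sd / mean       # coefficient of variation, in percent
print(f"mean = {mean:.2f}, median = {np.median(apc):.2f}, CV = {cv:.1f}%")
```

Note how a few expensive titles push the mean well above the median and drive the CV past 100%, the same pattern reported for Q4.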
Regarding CiteScore, G1 journals demonstrated superior performance in the intermediate quartiles (Q2 and Q3), while in Q1 and Q4, the differences between the models were less pronounced. In Q2, G1 achieved a mean of 3.84 and a median of 3.90, surpassing G2, which had a mean of 2.79 and a median of 3.00 (z = 5.03, p < 0.05). In Q3, G1’s mean value of 1.96 was significantly higher than G2’s mean of 1.50 (z = 3.54, p < 0.05). On the other hand, in Q1, which includes the highest-impact journals, G1 had a mean of 9.10 and a median of 7.75, while G2 recorded a mean of 8.17 and a median of 7.10; this difference was not statistically significant (z = 1.25, p > 0.05). In Q4, the mean values were similar, with G1 at 0.67 and G2 at 0.59 (z = 0.75, p > 0.05). These results suggest that G1 holds an advantage in the intermediate ranges, while performance is more balanced at the extremes (Q1 and Q4) (Tables 2; A5).
Data dispersion revealed notable differences between the models. In Q1, both G1 and G2 showed high variability, with standard deviations of 5.15 and 5.32, respectively. In Q2, G1’s coefficient of variation (31.70%) was lower than G2’s (43.85%), indicating greater consistency among APC-charging journals in this quartile. In the lower quartiles (Q3 and Q4), G2 displayed greater homogeneity, with standard deviations of 0.77 and 0.35, compared to G1’s 0.73 and 0.49. These findings point to a more stable performance by G2 in the lower quartiles, while G1 stands out in absolute metrics in the intermediate ranges (Tables 2; A5).
G1 journals showed superior performance in the number of accumulated citations across all quartiles, particularly in the upper quartiles (Q1 and Q2), where the differences compared to G2 were more pronounced (Table 2). In Q1, G1 presented a mean of 11,389 citations and a median of 2,090, while G2 recorded a mean of 1,577 and a median of 1,154 (z = 3.93, p < 0.05) (Tables 3; A6). In Q2, G1 reached a mean of 2,389 citations, significantly higher than G2’s mean of 462 (z = 5.72, p < 0.05) (Tables 3; A6). The differences were minor in the lower quartiles, but still favored G1. In Q3, G1’s mean was 481 citations versus G2’s 340 (z = 2.57, p < 0.05) (Tables 3; A6). In Q4, both models presented similar values, with G1 averaging 111 citations and G2 83 (z = 0.32, p > 0.05) (Tables 3; A6).
In addition to the means, the dispersion analysis revealed high variability in G1’s accumulated citations, especially in Q1, with a standard deviation of 43,885 and a coefficient of variation of 385.33% (Table 3). In contrast, G2 exhibited greater consistency in the same quartile, with a standard deviation of 1,573 and a coefficient of variation of 99.81% (Table 3). This heterogeneity in G1 reflects the presence of journals with extremely high citation counts, such as IEEE Access (484,743 citations) and Advanced Science (95,266 citations), which raise the overall average (Table 2). In Q2, G1 also showed greater variability, with a standard deviation of 6,962, while G2 maintained a standard deviation of 537, indicating greater homogeneity (Table 3).
When examining the raw number of citations, G1 concentrated 79.86% of the total citations in Q1 (2,414,482 citations), while G2 accumulated only 82,002 (Table 3). This difference reflects the substantially greater academic impact of APC-charging journals in the most prestigious segments. In total accumulated citations, G1 reached 2,880,490, far surpassing G2, which totaled 143,084 (Table 3). This phenomenon is not limited to Engineering and has been identified in broader studies, such as Miranda and Garcia-Carpintero (2019), who found that Q1 publications account for an average of 65% of total citations in a research area, with variations reaching up to 98% depending on the discipline. Additionally, Schvirck et al. (2024) highlight the invisibility of articles published in lower-impact journals, reinforcing that academic productivity pressures often perpetuate a cycle in which citations are concentrated in higher-prestige publications. These results indicate that G1 has a clear advantage in absolute citation metrics, particularly in the upper quartiles. However, G2’s greater homogeneity in the lower quartiles suggests a more stable and accessible approach in terms of scientific impact.
The correlation between CiteScore and accumulated citations revealed consistent patterns and significant differences between the G1 and G2 models, highlighting the interactions between perceived quality and academic impact. In Q1, the correlation was positive and significant for both models, with G1 showing a correlation coefficient of r=0.78 (p < 0.05), while G2 recorded r=0.63 (p < 0.05), indicating a stronger association in G1 between quality and impact metrics (Table A6). In Q2, this relationship was even more pronounced for G1 (r=0.84, p < 0.05), reflecting greater alignment between the two variables, whereas G2 maintained a positive but more moderate correlation (r=0.57, p < 0.05). In the lower quartiles (Q3 and Q4), correlations were weaker for both models, but G2 demonstrated greater stability, with coefficients of r=0.41 (p < 0.05) in Q3 and r=0.35 (p < 0.05) in Q4, compared to r=0.29 (p < 0.05) and r=0.21 (p > 0.05) for G1, respectively (Table A6).
These results indicate that G1 exhibits a stronger relationship between perceived quality and academic impact in the upper quartiles. At the same time, G2 stands out for a more consistent and uniform correlation in the lower quartiles. Furthermore, the variability in the correlation coefficients reflects G1’s heterogeneity in the lower quartiles, where values fluctuate more sharply, whereas G2 presents a more linear trajectory. These findings suggest that Diamond Open Access journals (G2), although less competitive in absolute metrics, can maintain a balanced relationship between CiteScore and accumulated citations, especially in less prestigious contexts.
G1 journals, on average, published more articles in all quartiles, particularly in Q1 and Q2, while G2 displayed greater consistency in the reported values. In Q1, G1 reached an average of 1,441 published articles, with a median of 1,223, while G2 recorded an average of 577 and a median of 510 (z = 4.27, p < 0.05). In Q2, G1 averaged 554, surpassing G2, which had an average of 213 (z = 3.98, p < 0.05). The differences in the lower quartiles decreased, but G1 maintained the lead: in Q3, G1 had an average of 212 articles versus G2’s 187 (z = 1.93, p < 0.05). In Q4, both models presented similar values, with G1 averaging 88 articles and G2 averaging 79 (z = 0.85, p > 0.05) (Tables 2; A7).
The variability in values reinforces the differences between the models. In Q1, G1 showed greater variability, with a standard deviation of 613 articles, compared to G2, which had a standard deviation of 198. In Q2, G1’s coefficient of variation (38.49%) was higher than G2’s (28.44%), indicating greater heterogeneity among APC-charging journals. G2 maintained greater homogeneity in the lower quartiles, with standard deviations of 79 in Q3 and 25 in Q4, while G1 presented 97 and 32, respectively (Table A7). These results suggest that G1 leads in absolute publication metrics, especially in the upper quartiles, while G2 adopts a more stable and uniform approach in terms of publication volume.
The percentage of cited articles (% Cited) revealed significant differences between the G1 and G2 models, particularly in the top 10% and Q1. In the top 10%, G2 surpassed G1 with an average of 88.75% of articles cited, compared to G1’s 83.36% (z = 3.12, p < 0.05). In Q1, G1 led with an average of 78.55%, while G2 reached 75.88% (z = 2.84, p < 0.05). In Q2, the averages were similar, with G1 at 67.92% and G2 at 66.34% (z = 1.47, p > 0.05). In the lower quartiles, the differences decreased even further: in Q3, G1 had 58.23%, and G2 had 59.01% (z = 0.93, p > 0.05). In Q4, the values were practically identical, with G1 at 43.89% and G2 at 43.77% (z = 0.12, p > 0.05) (Table A8).
The dispersion analyses highlighted the greater variability of G1 in the upper quartiles, with a coefficient of variation of 18.22% in Q1, while G2 showed lower variability, with a coefficient of 12.67% (Table A8). G2 maintained greater stability in the lower quartiles, with standard deviations of 9.31 in Q3 and 7.88 in Q4, compared to G1’s 11.45 and 9.12, respectively (Table A8). These results indicate that G2 has an advantage in relative metrics in the most competitive segment (top 10%), but G1 dominates in absolute percentages in the upper quartiles. In the lower quartiles, both models show similar performance, with G2 displaying a slight advantage in terms of stability.
The comparative analysis of metrics between G1 and G2 journals revealed differences reflecting the two models' structural and functional characteristics. While G1 stands out in absolute metrics in the most prestigious segments, G2 exhibits a more balanced and consistent profile in relative metrics.
The analyses carried out highlighted marked differences between APC-charging Open Access journals (G1) and Diamond Open Access (G2) journals in the field of Engineering, considering metrics such as CiteScore, accumulated citations, published articles, and the percentage of cited articles. Overall, G1 journals demonstrated superior performance in absolute metrics, particularly in the upper quartiles (Q1 and Q2), which concentrated most of the citations and published articles. This advantage is evidenced by data such as G1’s average of 11,389 citations in Q1, compared to G2’s 1,577, and G1’s leadership in the number of articles published in the upper quartiles. However, this superiority comes with greater variability in results, reflecting the heterogeneity of editorial practices and academic impact among APC-charging journals.
The results of this study also point to specific challenges for researchers from less well-funded institutions. While the APC model is associated with greater visibility and impact in absolute metrics, it perpetuates economic barriers that impede equitable access to publishing in higher-prestige journals. This reality is especially problematic for developing countries with limited research funding. Conversely, the Diamond Open Access model offers a more accessible alternative but faces significant challenges in terms of funding and sustainability. The disparities in publication costs and impact indices underscore the need for public policies that encourage the adoption of inclusive models, such as Diamond Open Access, aligned with the goal of democratizing access to scientific knowledge. The literature highlights that solutions to reduce publication costs, as suggested by Oliveira et al. (2023) and Rodrigues et al. (2022), can significantly promote fairer and more accessible editorial practices.
Additionally, G2 journals stood out for their stability and consistency in relative metrics, such as the percentage of cited articles, especially in the lower quartiles and the top 10% segment. In the top 10%, G2 surpassed G1 with 88.75% of articles cited, compared to G1’s 83.36%. Moreover, in the lower quartiles (Q3 and Q4), G2 maintained lower dispersion in the results, suggesting a more homogeneous and accessible approach regarding scientific impact. This consistency is critical for democratizing scientific knowledge, particularly in contexts with limited resources.
Thus, the findings point to a trade-off between absolute impact and accessibility. While G1 leads in volumetric metrics in the most prestigious segments, G2 offers more stable and balanced performance, especially in less competitive contexts. These results underscore the importance of both publishing models to meet the diverse demands of the academic community in the field of Engineering, with G1 favoring visibility in high-impact metrics and G2 promoting greater accessibility and sustainability.
Although robust, the present study has significant limitations that must be considered. First, the analysis was limited to journals indexed in the Scopus database, excluding other widely used indexers such as the Web of Science and PubMed, which may limit the breadth and representativeness of the results. Furthermore, the focus on quantitative metrics, such as CiteScore, accumulated citations, the number of published articles, and the percentage of cited articles, did not include qualitative factors, such as societal impact or the relevance of journals in specific regional contexts, which could complement the analyses. Another relevant point is that categorizing journals by quartiles may not adequately capture the nuances between journals near the boundaries of those categories, especially in intermediate quartiles. Finally, a significant limitation is the scarcity of literature allowing direct comparisons with previous studies since comprehensive, systematic investigations comparing APC-based Open Access and Diamond Open Access journals in Engineering are practically nonexistent. This gap restricts the possibility of contextualizing the results within a broader academic landscape and reinforces the need for future studies that expand the methodological scope and explore more integrated conceptual approaches.
The findings of this study reinforce the urgency of debating access policies and publication costs in the academic arena. While APC-based journals offer greater visibility and impact in absolute metrics, charging fees perpetuates economic barriers that limit accessibility, especially in contexts with fewer resources. The promotion of the Diamond Open Access model, in turn, requires greater institutional and political support, including sustainable funding, to increase its representation in the publishing market and truly democratize access to knowledge.
Future studies are encouraged to broaden the analysis to include journals indexed in other databases, such as Web of Science and PubMed, to enhance the scope of the conclusions. Incorporating qualitative indicators, such as societal impact and regional relevance, would also be valuable, providing a more holistic perspective on the role of Open Access journals. Investigating external factors, such as editorial policies, indexing practices, and regional influences, can offer a more comprehensive understanding of the scientific publishing market dynamics. Such approaches will contribute to developing strategies that foster a more inclusive and equitable editorial system.