24 Matching Annotations
  1. Jan 2026
  2. milenio-nudos.github.io
    1. Turning to comparability across groups, invariance results were mixed across assessments. In PISA, the bidimensional model achieved configural, metric, and scalar invariance across countries, providing strong evidence that the construct is comparable across participating education systems (supporting H2). In ICILS, the model reached configural and metric invariance but did not meet scalar invariance criteria across countries, suggesting that cross-country mean differences should be interpreted cautiously and reinforcing the importance of explicitly evaluating between-country comparability before drawing substantive conclusions (partial support for H2). These results indicate that even when the same two-dimensional structure is recoverable, the degree of cross-national comparability may depend on assessment-specific design features and item functioning. In contrast, gender invariance results were consistently supportive. Both assessments achieved scalar invariance across gender, indicating that General and Specialized DSE can be compared meaningfully between boys and girls within each dataset (supporting H3). This finding strengthens the validity of gender comparisons in DSE and suggests that observed gender gaps in DSE are unlikely to be driven by measurement non-equivalence, at least within the tested frameworks.

      I think a reference to Hristov et al. and Campos & Scherer could add depth here. Hristov conducted an invariance analysis across gender and migration-status groups, and Campos & Scherer applied the alignment method to the DSE construct.

    2. Gender differences were clearly dimension-specific: girls tended to report slightly higher General DSE, whereas boys reported substantially higher Specialized DSE across most education systems. This pattern indicates that gender disparities depend on the type of digital task domain being evaluated (supporting H4).

      ... consistently with previous findings (Gerbhart, year)

    3. or computational

      I would remove the "computational" adjective, as creating presentations or carrying out most information-communication activities implies some computational proficiency, if done on a computer.

    4. “logical solution”, “programming”). Additionally, it showed a positive residual correlation with “Privacy settings”

      Above and further on we use single quotes; we should be consistent about that.

    5. The high covariance indicates that students perceive multimedia creation and web development as a unified dimension of ‘content creation’, a finding that aligns with the DigComp framework (‘3.1 Develop Digital Content’).

      Maybe we can refer back to the items-DigComp table.

    6. The official subset of the data reaches around 393,000 students, nested in 52 countries (mainly OECD).

      Isn't it necessary to specify that this subset corresponds to the countries that applied the ICT questionnaire?

  3. Dec 2025
  4. milenio-nudos.github.io
    1. In turn, the literature consistently reports that students with low expectations of specialized self-efficacy sometimes score higher on standardized tests of digital skills

      That's not right: higher Specialized DSE = lower CIL; General DSE does show a relationship, at least in ICILS. See Campos & Scherer on this.

    2. The dimensions of the DSE are not only relevant to contrast for theoretical reasons, but also because consideration of this approach has an impact on the distribution among groups

      The dimensions of the DSE are not only relevant to contrast for theoretical reasons, but also because the application of this approach has an impact on the distribution among groups

    3. The problem is that recent definitions of Digital Competence are no longer framed within a bidimensional approach to self-efficacy with technologies.

      I think that, as written, it reads as though the competence framework is framed within bidimensional self-efficacy, when it is the other way around. Perhaps the sentence could be reversed: "The problem is that the bidimensional approach to self-efficacy is no longer suitable for recent definitions of Digital Competence."

    4. we aim to clarify whether differences in DSE are consistent across contexts or instead a product of how assessments operationalize the construct.

      The second claim bothers me. We don't have a hypothesis about the effect of how the constructs are operationalized. A failure of invariance can stem from many things (cultural differences, administration problems, measurement error, countries very dissimilar from the rest), and I think this framing sets up a false binary (either it is consistent or it is poorly operationalized).

  5. Oct 2025
  6. milenio-nudos.github.io
    1. (ulfert-blank_assessing_2022?) suggests working with a unified construct denominated Digital Self-efficacy (hereinafter DSE) to enable high-level research on this issue. Considering the gaps and inconsistencies in previous measurements, (ulfert-blank_assessing_2022?) points out that the DSE construct has to

      I think we need to draw a clearer distinction so this is understood (to begin with, we are already using the abbreviation DSE before this part). Since we refer to the earlier scales as measurements of DSE, and up to this point they were not, strictly speaking, scales that measured DSE, I think it leads to confusion. I would say "self-efficacy associated with technology" up to this point in the paper; that would make it somewhat clearer, since the earlier scales do not measure the same thing.

  7. Sep 2025
  8. milenio-nudos.github.io