Journal of Minimally Invasive Surgery 2023; 26(3): 97-107

Published online September 15, 2023

https://doi.org/10.7602/jmis.2023.26.3.97

© The Korean Society of Endo-Laparoscopic & Robotic Surgery

Correspondence to : Woojoo Lee

Department of Public Health Science, Graduate School of Public Health, Seoul National University, 1 Gwanakro, Gwanak-gu, Seoul 08826, Korea

E-mail: lwj221@snu.ac.kr

https://orcid.org/0000-0001-7447-7045

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Directed acyclic graphs (DAGs) are useful tools for visualizing hypothesized causal structures in an intuitive way and for selecting relevant confounders in causal inference. However, despite their increasing use in clinical and surgical research, causal graphs can be misused owing to a lack of understanding of their central principles. In this article, we introduce the basic terminology and fundamental rules of DAGs, together with DAGitty, a user-friendly program for drawing and analyzing DAGs. Specifically, we describe, with examples, how to determine which variables should or should not be adjusted based on the backdoor criterion. In addition, the occurrence of various types of biases is discussed with caveats, including the problem caused by the traditional approach of using *p*-values for confounder selection. Moreover, a detailed guide to DAGitty is provided with practical examples regarding minimally invasive surgery. Essentially, the primary benefit of DAGs is to help researchers clarify their research questions and the corresponding designs based on domain knowledge. With these strengths, we propose that the use of DAGs may contribute to rigorous research designs and lead to transparency and reproducibility in research on minimally invasive surgery.

**Keywords** Directed acyclic graphs, Causal diagrams, Confounder selection, Backdoor criterion, *d*-Separation

In medical research, causality has long been treated as a matter of the utmost importance. For instance, studies on minimally invasive surgery (MIS) are based on the causal hypothesis that such a surgical method reduces the burden on patients, leading to lower morbidity and more rapid functional recovery than conventional open surgical treatments [1,2]. However, randomized controlled trials, considered the gold standard for establishing causality [3–5], are often difficult to implement owing to ethical or economic constraints. For these reasons, researchers frequently conduct observational studies, but these are often regarded as providing associational, not causal, results [3,4]. The main reason for losing causal meaning is the failure to compare like with like. For example, baseline characteristics such as age and socioeconomic status may not be balanced between the two groups. To make the groups comparable, confounders should be carefully adjusted so that the causal effect of interest can be identified. Nonetheless, deciding which variables should or should not be adjusted is a difficult task in observational studies. Traditional variable selection methods such as backward elimination and stepwise selection have been criticized because they often lead to biased causal effect estimates. Other variable selection methods are subject to the same criticism; in particular, variable selection aimed at prediction may not be optimal for causal inference.

For variable selection with a focus on causal inference, we should determine which variables confound the relationship between the treatment (or exposure) variable and the outcome variable of interest. For this understanding, we need domain knowledge about the causal structure among the research variables. Directed acyclic graphs (DAGs) [4,7–9] have been widely used to visualize the domain knowledge, show which variables are confounders to be adjusted, and indicate when the causal effect of interest is nonparametrically identified in observational studies. Moreover, DAGs aid researchers in clarifying possible biases from current research designs such as selection bias and measurement error bias. Consequently, the use of DAGs may contribute to transparency and reproducibility in surgical research. In fact, a growing number of clinical journals have requested their inclusion in either the main body or supplementary material. Notwithstanding their recent extensive use, however, the lack of a clear understanding of the essential principles of DAGs may lead to their incorrect use [10].

In this article, we aim to describe the fundamental principles of DAGs and highlight their strengths in surgical research using practical examples. For this purpose, the current paper consists of the following three steps. In the first step, we briefly review the basic concepts and rules of DAGs. In particular, we link them to clinical examples from the literature on MIS. In the next step, we demonstrate potential pitfalls caused by the traditional *p*-value-based approach to confounder selection. In the final step, we provide a practical guide to DAGitty with examples from MIS research.

In this section, we briefly review the basic concepts and terminologies pertaining to DAGs and introduce important criteria that help readers specify research models.

Mathematically, a graph is defined as a set of nodes and edges. In a graph, nodes may or may not be connected by edges. DAGs additionally have two important characteristics. First, all edges are directed; that is, each edge is an arrow pointing from one node to another. Second, the graph is acyclic: starting from any node and following the arrows, one can never return to the starting node.

We consider simple examples of DAGs for explanatory purposes. In Fig. 1A, we see the arrow from

It is worth mentioning that we do not assume any functional form for the causal relationship [7–9]. In a nutshell, an arrow from

To deal with causal relationships generally, it is useful to categorize nodes as shown below. Consider first Fig. 2. We see that the graph consists of four nodes with three arrows. In this graph, the node where the arrow starts from

The key information in a DAG lies in the absence of arrows and in their directions, because these represent the causal structure of the research variables based on subject-matter knowledge. We introduce three fundamental configurations that constitute the building blocks of general DAGs; each consists of three nodes and two arrows. The first configuration is a chain, in which both arrows point in the same direction through the middle node.
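Since the figure and its variable names are not reproduced in this text, the following minimal simulation (our own illustration, with generic names) shows the defining statistical behavior of a chain X → Z → Y: X and Y are marginally associated, but become independent within strata of the middle node Z.

```python
import random

random.seed(0)
n = 100_000

def pearson(xs, ys):
    # Sample Pearson correlation, computed from scratch (stdlib only)
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Chain: X -> Z -> Y (generic binary variables)
X = [random.random() < 0.5 for _ in range(n)]
Z = [random.random() < (0.8 if x else 0.2) for x in X]   # Z caused by X
Y = [random.random() < (0.8 if z else 0.2) for z in Z]   # Y caused by Z only

print(pearson(X, Y))  # clearly non-zero: X and Y are marginally associated
for z in (False, True):
    xs = [x for x, zz in zip(X, Z) if zz == z]
    ys = [y for y, zz in zip(Y, Z) if zz == z]
    print(pearson(xs, ys))  # approximately 0 within each stratum of Z
```

Conditioning on Z blocks the chain, which is why adjusting for a mediator removes (part of) the effect one may want to estimate.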

The second configuration is called a fork, which includes two arrows stemming from the middle node [9]. An example of a fork is suggested in Fig. 4, where
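The fork's behavior can be checked the same way with a generic simulation (our illustration, not the paper's example): a common cause Z of X and Y induces a spurious association that disappears once Z is adjusted for.

```python
import random

random.seed(0)
n = 100_000

def pearson(xs, ys):
    # Sample Pearson correlation (stdlib only)
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Fork: X <- Z -> Y (Z is a common cause; no arrow between X and Y)
Z = [random.random() < 0.5 for _ in range(n)]
X = [random.random() < (0.8 if z else 0.2) for z in Z]
Y = [random.random() < (0.8 if z else 0.2) for z in Z]

print(pearson(X, Y))  # non-zero: confounding by the common cause Z
for z in (False, True):
    xs = [x for x, zz in zip(X, Z) if zz == z]
    ys = [y for y, zz in zip(Y, Z) if zz == z]
    print(pearson(xs, ys))  # approximately 0 once Z is adjusted for
```

This is exactly the situation in which adjustment for the middle node is warranted.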

A collider, the third configuration, denotes the middle node into which two arrows are directed [9]. In Fig. 5,
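The collider behaves in the opposite way, as the following generic simulation (again our illustration) shows: X and Y are independent by construction, yet restricting attention to one stratum of the collider Z creates a strong association.

```python
import random

random.seed(0)
n = 100_000

def pearson(xs, ys):
    # Sample Pearson correlation (stdlib only)
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Collider: X -> Z <- Y; X and Y are generated independently
X = [random.random() < 0.5 for _ in range(n)]
Y = [random.random() < 0.5 for _ in range(n)]
Z = [random.random() < (0.8 if x == y else 0.2) for x, y in zip(X, Y)]

print(pearson(X, Y))  # approximately 0: no marginal association
x1 = [x for x, z in zip(X, Z) if z]
y1 = [y for y, z in zip(Y, Z) if z]
print(pearson(x1, y1))  # clearly non-zero: conditioning on the collider opens the path
```

This is the mechanism behind collider-stratification (selection) bias: adjusting for, or selecting on, a common effect manufactures an association that does not exist in the population.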

We introduced some basic and important components of DAGs and showed how to connect (unconditional and conditional) independence with simple DAG structures. Nonetheless, DAGs used in actual research are not as simple as the examples suggested above. To determine conditional independence among variables in general, the following rule, known as *d*-separation, is needed.

As can be seen, these two rules are already mentioned in the previous examples. If a pair of nodes is
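For general graphs, this check can be mechanized. Below is a minimal Python sketch of a *d*-separation test using the standard "ancestral moral graph" reduction (this is our own illustrative implementation, not code from the paper or from DAGitty; node names are generic).

```python
from itertools import combinations

def ancestors(parents, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(), list(nodes)
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(parents.get(v, ()))
    return seen

def d_separated(parents, x, y, given):
    """Test X _||_ Y | given, via the moral graph of the ancestral subgraph."""
    keep = ancestors(parents, {x, y} | set(given))
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:                       # keep each original edge, undirected
            adj[v].add(p); adj[p].add(v)
        for a, b in combinations(ps, 2):   # "marry" co-parents of a common child
            adj[a].add(b); adj[b].add(a)
    # Delete conditioned nodes, then test connectivity between x and y
    blocked = set(given)
    stack, seen = [x], set()
    while stack:
        v = stack.pop()
        if v in seen or v in blocked:
            continue
        seen.add(v)
        stack.extend(adj[v] - blocked)
    return y not in seen

# Generic example: fork X <- Z -> Y plus collider X -> W <- Y
parents = {"X": ["Z"], "Y": ["Z"], "W": ["X", "Y"], "Z": []}
print(d_separated(parents, "X", "Y", []))          # False: open path via the fork Z
print(d_separated(parents, "X", "Y", ["Z"]))       # True: fork blocked by adjusting Z
print(d_separated(parents, "X", "Y", ["Z", "W"]))  # False: conditioning on collider W reopens a path
```

The three calls reproduce the two rules in miniature: adjusting the fork variable blocks the path, while additionally adjusting the collider opens a new one.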

Fig. 6 shows a DAG including chains, forks, and colliders. In the causal diagram, the path between

Although it is widely recognized that confounding should be considered properly in research designs, it is not easy to decide which variables should be adjusted in each study. DAGs can help researchers to identify them. The intuition behind the variable selection based on DAGs is that we block non-causal paths (the so-called

Should we adjust

When researchers carry out observational studies, some variables may be omitted or unmeasured for reasons such as the protection of personal information or difficulty of measurement. If not all nodes in a DAG are observed, is it still possible to investigate the causal effect? Fortunately, it is in many cases. More precisely, there is a general condition to estimate the effect of
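The general condition referred to here is Pearl's backdoor criterion [8,9]; in standard notation (not reproduced from the paper's own display): a set of variables $Z$ satisfies the backdoor criterion relative to a treatment $X$ and an outcome $Y$ if (i) no node in $Z$ is a descendant of $X$, and (ii) $Z$ blocks every path between $X$ and $Y$ that contains an arrow into $X$. When the criterion holds, the causal effect is identified by the adjustment formula

$$
P\{Y \mid do(X = x)\} \;=\; \sum_{z} P(Y \mid X = x, Z = z)\, P(Z = z).
$$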

For a more concrete explanation, we revisit the example described in Fig. 6. Suppose that our aim is to investigate the causal effect of

An important question is to ask whether such

In practice, researchers conventionally select confounders based on statistical significance using *p*-values.

For the moment, assume that we collect data without

For this numerical study, Poisson regression was used and relative risks (RRs) were reported. In Table 1, the reported RR (0.979) is the average of the 1,000 estimates obtained from the generated data sets, and the bias is the difference between the estimated RR and the true value of 1. The result indicates that the bias of the RR obtained by the model adjusting
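The exact data-generating model of the study is not reproduced in this text. The following stylized simulation (our own, with generic binary variables and simple stratification in place of Poisson regression) reproduces the qualitative phenomenon: under a null treatment effect (true RR = 1), a variable M that is strongly associated with the outcome, and would therefore survive a *p*-value screen, nevertheless biases the RR once it is adjusted for, because it is a collider on an M-structure.

```python
import random

random.seed(1)
n = 200_000

# M-structure with a null treatment effect (true RR = 1):
# U1 -> A, U1 -> M, U2 -> M, U2 -> Y; no arrow A -> Y. U1, U2 are unmeasured.
U1 = [random.random() < 0.5 for _ in range(n)]
U2 = [random.random() < 0.5 for _ in range(n)]
A = [random.random() < (0.8 if u else 0.2) for u in U1]  # treatment, caused by U1
Y = [random.random() < (0.8 if u else 0.2) for u in U2]  # outcome, caused by U2 only
M = [random.random() < 0.05 + 0.45 * u1 + 0.45 * u2      # collider: U1 -> M <- U2
     for u1, u2 in zip(U1, U2)]

def risk_ratio(a, y):
    p1 = sum(yy for aa, yy in zip(a, y) if aa) / sum(a)
    p0 = sum(yy for aa, yy in zip(a, y) if not aa) / (len(a) - sum(a))
    return p1 / p0

print(risk_ratio(A, Y))  # crude RR: close to the true value of 1
# M is strongly associated with Y (via U2), so a p-value screen would retain it...
a1 = [aa for aa, m in zip(A, M) if m]
y1 = [yy for yy, m in zip(Y, M) if m]
print(risk_ratio(a1, y1))  # ...yet stratifying on M pushes the RR away from 1
```

Stratification stands in here for regression adjustment; the direction and size of the induced bias depend on the (assumed) strengths of the U1 and U2 effects.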

It is important to note that satisfying the backdoor criterion in DAGs refers to

Even though we have explained the fundamental rules such as

Fig. 10 shows what is displayed on the screen when users access the website. A new node is created by clicking an empty spot on the graph and typing a variable name. A directed arrow between two nodes is inserted by clicking the ancestor node first and then the descendant node; clicking the ancestor node once more deletes the arrow. On the left side, menus for basic operations are displayed; we focus on the ones essential for beginners. The first menu on the left shows
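In addition to point-and-click editing, DAGitty maintains a textual "Model code" representation of the graph, which can be copied, saved, and pasted back in to reproduce a diagram. A minimal sketch with generic variable names (not one of the paper's figures):

```
dag {
  X [exposure]
  Y [outcome]
  Z
  Z -> X
  Z -> Y
  X -> Y
}
```

Here `Z` is a common cause of the exposure `X` and outcome `Y`; marking the exposure and outcome lets DAGitty compute and display the minimal sufficient adjustment sets automatically.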

The simple examples illustrated in the previous section are revisited with DAGitty. First, Fig. 11 shows a chain structure. As expected, we see that

Fig. 12 represents a fork. The violet-colored biasing path between

Fig. 13 presents the collider case and

Next, the example of M-bias suggested in Fig. 14, which consists of five variables (

DAGitty has been used in several studies regarding MIS, and we explore an example from [25]. The authors were interested in the causal effect of surgery methods (open vs. MIS) on venous thromboembolism (

In the graph, there are three exogenous research variables that are not affected by other variables in the model: patients’ age (

In this situation,

Then, what if we adjust

In this section, we present a realistic DAG example from [26]. Their DAG included 14 variables, as shown in Fig. 17. In their study, the intervention was defined as screening for modifiable high-risk factors combined with targeted interventions, and the outcome was postoperative complications in patients undergoing colorectal cancer surgery. Unlike the previous examples, the DAG in Fig. 17 included unobserved variables colored gray:

To answer the question, assume that we observed all variables for the moment. In this case, DAGitty suggests three minimal sufficient adjustment sets as shown in Fig. 18. Note that the parents of both

Specifically, the following logistic regression can be used for estimating the propensity score.
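The regression display itself is not reproduced in this text; a generic form (the covariate set $\mathbf{L} = (L_1, \ldots, L_p)$ is a placeholder for the chosen adjustment set, not the paper's exact specification) is

$$
\operatorname{logit} P(A = 1 \mid \mathbf{L}) \;=\; \beta_0 + \beta_1 L_1 + \cdots + \beta_p L_p,
$$

where $A$ is the treatment indicator, and the fitted probability $\hat{e}(\mathbf{L}) = \hat{P}(A = 1 \mid \mathbf{L})$ is the estimated propensity score.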

However, some variables (

The usefulness of DAGs often goes beyond selecting confounders. If the DAG had been displayed prior to data collection and used in the research design, readers might recognize the necessity of

In this article, the basic terminology and rules of DAGs were introduced. In addition, we focused on how the fundamental concepts and principles such as

Specifically, the strength of DAGs lies in providing researchers with

Despite their usefulness and applicability in empirical studies, DAGs also have limitations. First, DAGs do not contain information about the functional forms of the relationships among variables [27]. As shown in the previous examples, the graphs do not assume specific functions such as linear or quadratic ones. This means that researchers must still take care not to misspecify the functional forms even when confounders have been properly selected using DAGs. To this end, researchers may modify the models based on statistical significance, or employ machine learning methods that accommodate a wide range of functional forms. Second, although DAGs can reveal the various types of biases, the graphs themselves do not provide their magnitudes [5,27,28]. This implies that additional domain knowledge is required to evaluate the extent of the bias. Third, the causal directions in DAGs are not always clear, even when based on theory. There may be cases where the directions of arrows are ambiguous, especially when both directions are plausible, leading to several different graphs for the same research problem [27,28]. This issue becomes more likely as the number of variables increases. Concerning this issue, the disjunctive cause criterion has been suggested as an alternative strategy for confounder selection [16,29]. The criterion states that sufficient control for confounding is obtained by (1) adjusting for all causes of the treatment (exposure), the outcome, or both; (2) excluding instrumental variables; and (3) including proxy variables for unmeasured common causes of treatment and outcome [16]. These practical rules may help researchers select confounders and reduce the risk of unmeasured confounding even when the number of variables is large.

Despite these limitations, DAGs play a major role in shaping research questions and are useful in

Conceptualization, Investigation: SB, WL

Visualization: SB

Writing–original draft: SB, WL

Writing–review & editing: SB, WL

All authors read and approved the final manuscript.

All authors have no conflicts of interest to declare.

None.

The data presented in this study are available on request from the corresponding author.

- Nezhat F. Minimally invasive surgery in gynecologic oncology: laparoscopy versus robotics. Gynecol Oncol 2008;111(2 Suppl):S29-S32.
- McAfee PC, Phillips FM, Andersson G, et al. Minimally invasive spine surgery. Spine (Phila Pa 1976) 2010;35(26 Suppl):S271-S273.
- Hernán MA. A definition of causal effect for epidemiological research. J Epidemiol Community Health 2004;58:265-271.
- Pearl J, Glymour M, Jewell NP. Causal inference in statistics: a primer. John Wiley & Sons; 2016.
- Etminan M, Collins GS, Mansournia MA. Using causal diagrams to improve the design and interpretation of medical research. Chest 2020;158(1S):S21-S28.
- Tennant PWG, Murray EJ, Arnold KF, et al. Use of directed acyclic graphs (DAGs) to identify confounders in applied health research: review and recommendations. Int J Epidemiol 2021;50:620-632.
- Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology 1999;10:37-48.
- Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669-688.
- Pearl J. Causality. Cambridge University Press; 2009.
- Suzuki E, Shinozaki T, Yamamoto E. Causal diagrams: pitfalls and tips. J Epidemiol 2020;30:153-162.
- VanderWeele TJ, Robins JM. Directed acyclic graphs, sufficient causes, and the properties of conditioning on a common effect. Am J Epidemiol 2007;166:1096-1104.
- VanderWeele TJ, Hernán MA, Robins JM. Causal directed acyclic graphs and the direction of unmeasured confounding bias. Epidemiology 2008;19:720-728.
- Greenland S, Pearl J. Article on causal diagrams. In: Boslaugh S, editor. Encyclopedia of epidemiology. Sage Publications; 2007. p. 149-156.
- Sauer BC, Brookhart MA, Roy J, VanderWeele T. A review of covariate selection for non-experimental comparative effectiveness research. Pharmacoepidemiol Drug Saf 2013;22:1139-1145.
- Greenland S, Pearce N. Statistical foundations for model-based adjustments. Annu Rev Public Health 2015;36:89-108.
- VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol 2019;34:211-219.
- Liu W, Brookhart MA, Schneeweiss S, Mi X, Setoguchi S. Implications of M bias in epidemiologic studies: a simulation study. Am J Epidemiol 2012;176:938-948.
- Rothman KJ, Greenland S, Lash TL. Modern epidemiology. Lippincott Williams & Wilkins; 2008.
- Greenland S. Quantifying biases in causal models: classical confounding vs collider-stratification bias. Epidemiology 2003;14:300-306.
- Ding P, Miratrix LW. To adjust or not to adjust? Sensitivity analysis of M-bias and butterfly-bias. J Causal Inference 2015;3:41-57.
- Lee BK, Lessler J, Stuart EA. Improving propensity score weighting using machine learning. Stat Med 2010;29:337-346.
- Westreich D, Lessler J, Funk MJ. Propensity score estimation: neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. J Clin Epidemiol 2010;63:826-833.
- Cannas M, Arpino B. A comparison of machine learning algorithms and covariate balance measures for propensity score matching and weighting. Biom J 2019;61:1049-1072.
- Textor J. Drawing and analyzing causal DAGs with DAGitty. 2015 Aug 19 [Preprint]. DOI: https://doi.org/10.48550/arXiv.1508.04633.
- Kahr HS, Christiansen OB, Høgdall C, et al. Endometrial cancer does not increase the 30-day risk of venous thromboembolism following hysterectomy compared to benign disease: a Danish National Cohort Study. Gynecol Oncol 2019;155:112-118.
- Bojesen RD, Grube C, Buzquurz F, Miedzianogora RE, Eriksen JR, Gögenur I. Effect of modifying high-risk factors and prehabilitation on the outcomes of colorectal cancer surgery: controlled before and after study. BJS Open 2022;6:zrac029.
- Digitale JC, Martin JN, Glymour MM. Tutorial on directed acyclic graphs. J Clin Epidemiol 2022;142:264-267.
- Austin AE, Desrosiers TA, Shanahan ME. Directed acyclic graphs: an under-utilized tool for child maltreatment research. Child Abuse Negl 2019;91:78-87.
- VanderWeele TJ, Shpitser I. A new criterion for confounder selection. Biometrics 2011;67:1406-1413.


Institute of Health & Environment, Seoul National University, Seoul, Korea

Department of Public Health Sciences, Graduate School of Public Health, Seoul National University, Seoul, Korea

**Correspondence to:**Woojoo Lee

Department of Public Health Science, Graduate School of Public Health, Seoul National University, 1 Gwanakro, Gwanak-gu, Seoul 08826, Korea

E-mail: lwj221@snu.ac.kr

https://orcid.org/0000-0001-7447-7045

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Directed acyclic graphs (DAGs) are useful tools for visualizing the hypothesized causal structures in an intuitive way and selecting relevant confounders in causal inference. However, in spite of their increasing use in clinical and surgical research, the causal graphs might also be misused by a lack of understanding of the central principles. In this article, we aim to introduce the basic terminology and fundamental rules of DAGs, and DAGitty, a user-friendly program that easily displays DAGs. Specifically, we describe how to determine variables that should or should not be adjusted based on the backdoor criterion with examples. In addition, the occurrence of the various types of biases is discussed with caveats, including the problem caused by the traditional approach using *p*-values for confounder selection. Moreover, a detailed guide to DAGitty is provided with practical examples regarding minimally invasive surgery. Essentially, the primary benefit of DAGs is to aid researchers in clarifying the research questions and the corresponding designs based on the domain knowledge. With these strengths, we propose that the use of DAGs may contribute to rigorous research designs, and lead to transparency and reproducibility in research on minimally invasive surgery.

**Keywords**: Directed acyclic graphs, Causal diagrams, Confounder selection, Backdoor criterion, *d*-Separation

In medical research, causality has been dealt with as one of the utmost importance. For instance, studies regarding minimally invasive surgery (MIS) are based on the causal hypothesis that such a surgical method will reduce the burden on patients and lead to a decrease in morbidity and rapid functional recovery compared to conventional open surgical treatments [1,2]. However, randomized controlled trials, considered as the gold standard for causality [3–5], are often difficult to implement owing to ethical or economic issues. For these reasons, researchers frequently conduct observational studies, but these are often regarded as providing association results, not causality [3,4]. The main reason for losing causal meaning is the failure to compare like with like. For example, various baseline characteristics such as age and socioeconomic status may not be balanced between the two groups. To make them comparable, confounders should be cautiously adjusted to identify the causal effect of interest. Nonetheless, it is a difficult task to determine which variables to adjust or not to adjust in observational studies. Traditional variable selection methods such as backward elimination and stepwise selection have been criticized because they often lead to biased causal effect estimates. Other variable selection methods are also subject to the same criticism. In particular, variable selection for prediction purposes may not be optimal for causal inference.

For variable selection with a focus on causal inference, we should determine which variables confound the relationship between the treatment (or exposure) variable and the outcome variable of interest. For this understanding, we need domain knowledge about the causal structure among the research variables. Directed acyclic graphs (DAGs) [4,7–9] have been widely used to visualize the domain knowledge, show which variables are confounders to be adjusted, and indicate when the causal effect of interest is nonparametrically identified in observational studies. Moreover, DAGs aid researchers in clarifying possible biases from current research designs such as selection bias and measurement error bias. Consequently, the use of DAGs may contribute to transparency and reproducibility in surgical research. In fact, a growing number of clinical journals have requested their inclusion in either the main body or supplementary material. Notwithstanding their recent extensive use, however, the lack of a clear understanding of the essential principles of DAGs may lead to their incorrect use [10].

In this article, we aim to describe the fundamental principles of DAGs and highlight their strengths in surgical research using practical examples. For this purpose, the current paper consists of the following three steps. In the first step, we briefly review the basic concepts and rules of DAGs. Especially, we link them to clinical examples from the literature on MIS. In the next step, we demonstrate potential pitfalls caused by the traditional

In this section, we briefly review the basic concepts and terminologies pertaining to DAGs and introduce important criteria that help readers specify research models.

Mathematically, a graph is defined as a set of nodes and edges. In a graph, nodes may or may not be connected by edges. DAGs additionally have two important characteristics. First, a

We consider simple examples of DAGs for explanatory purposes. In Fig. 1A, we see the arrow from

It is worth mentioning that we do not assume any functional form for the causal relationship [7–9]. In a nutshell, an arrow from

To deal with causal relationships generally, it is useful to categorize nodes as shown below. Consider first Fig. 2. We see that the graph consists of four nodes with three arrows. In this graph, the node where the arrow starts from

The key information of DAGs lies in the absence of the arrow and its direction because they represent the causal structure of research variables based on subject matter knowledge. We introduce three fundamental configurations that constitute the basis of the general forms of DAGs. These consist of three nodes and two arrows. The first configuration is a

The second configuration is called a fork, which includes two arrows stemming from the middle node [9]. An example of a fork is suggested in Fig. 4, where

A collider, the third configuration, denotes the middle node into which two arrows are directed [9]. In Fig. 5,

We introduced some basic and important components of DAGs and how to connect (unconditional and conditional) independence with simple DAG structures. Nonetheless, DAGs used in actual research are not as simple as the examples suggested above. To determine conditional independence among variables in general, the following rule,

As can be seen, these two rules are already mentioned in the previous examples. If a pair of nodes is

Fig. 6 shows a DAG including chains, forks, and colliders. In the causal diagram, the path between

Although it is widely recognized that confounding should be considered properly in research designs, it is not easy to decide which variables should be adjusted in each study. DAGs can help researchers to identify them. The intuition behind the variable selection based on DAGs is that we block non-causal paths (the so-called

Should we adjust

When researchers carry out observational studies, there may be cases where some variables are omitted or not measured owing to reasons such as protection of personal information or difficulty of measurement. If the nodes are not all present in a DAG, is it possible to investigate the causal effect in this situation? Fortunately, it is still possible in many cases. More precisely, there is a general condition to estimate the effect of

For a more concrete explanation, we revisit the example described in Fig. 6. Suppose that our aim is to investigate the causal effect of

An important question is to ask whether such

In practice, researchers conventionally select confounders based on the statistical significance using

For the moment, assume that we collect data without

For this numerical study, Poisson regression was used and relative risks (RR) were reported. In Table 1, the estimated RR (0.979) is reported as the average of 1,000 estimates obtained by each generated data set, and the bias presents the difference between RR and the true value of 1. The result indicates that the bias of RR obtained by the model adjusting

It is important to note that satisfying the backdoor criterion in DAGs refers to

Even though we have explained the fundamental rules such as

When users access the website, Fig. 10 shows what is displayed on the screen. A new node is generated by clicking on an empty spot on the graph and typing a variable name. In addition, a directed arrow between two nodes can be inserted by clicking the ancestor node first and then the other node. If you click once more on the ancestor node, the arrow is deleted. On the left side, menus for basic operations are displayed. We focus on the essential ones for beginners. The first menu on the left shows

The simple examples illustrated in the previous section are revisited with DAGitty. First, Fig. 11 shows a chain structure. As expected, we see that

Fig. 12 represents a fork. The violet-colored biasing path between

Fig. 13 presents the collider case and

Next, the example of M-bias suggested in Fig. 14, which consists of five variables (

DAGitty has been used in several studies regarding MIS, and we explore an example from [25]. The authors were interested in the causal effect of surgery methods (open vs. MIS) on venous thromboembolism (

In the graph, there are three exogenous research variables that are not affected by other variables in the model: patients’ age (

In this situation,

Then, what if we adjust

In this section, we present a realistic example of DAG from [26]. Their DAG included 14 variables as shown in Fig. 17. In their study, the intervention was defined as screening for modifiable high-risk factors combined with targeted interventions, and the outcome was postoperative complications in patients undergoing colorectal cancer surgery. Unlike previous examples, the DAG in Fig. 17 included unobserved variables colored gray:

To answer the question, assume that we observed all variables for the moment. In this case, DAGitty suggests three minimal sufficient adjustment sets as shown in Fig. 18. Note that the parents of both

Specifically, the following logistic regression can be used for estimating the propensity score.

However, some variables (

The usefulness of DAG is often beyond selecting confounders. If the DAG were displayed prior to data collection and used in research design, readers might recognize the necessity of

In this article, the basic terminology and rules of DAGs were introduced. In addition, we focused on how the fundamental concepts and principles such as

Specifically, the strength of DAGs lies in providing researchers with

Nonetheless, DAGs also have limitations despite their usefulness and applicability in empirical studies. First, DAGs do not contain information about the functional forms of variables [27]. As shown in the previous examples, the graphs do not assume specific functions such as linear or quadratic. This indicates that researchers should avoid misspecifying the functional forms even with properly selected confounders using DAGs. Therefore, researchers may modify the models based on statistical significance, or employ various machine learning methods that allow a wide range of functional forms. Second, although the various types of biases using DAGs were addressed, their magnitudes are not provided by the graphs themselves [5,27,28]. This implies that additional domain knowledge is required to evaluate the extent of the bias. Third, the causal directions in DAGs are not always clear even when based on theory. In fact, there may be cases where the directions of arrows are somewhat ambiguous, especially when both directions are reasonable. This leads to several different graphs on the same research problem [27,28]. In particular, this issue may arise as the number of variables increases. Concerning this issue, the disjunctive cause criterion was suggested as one of the alternative strategies for confounder selection [16,29]. The criterion states that a sufficient control for confounding is obtained by (1) adjusting all causes of the treatment (exposure), outcome, or both, (2) excluding instrumental variables, and (3) including proxy variables for unmeasured variables that are common causes of treatment and outcome [16]. These practical rules may help researchers select confounders and reduce the risk of unmeasured confounding even when the number of variables is large.

Despite these limitations, DAGs play a major role in shaping research questions and are useful in

Conceptualization, Investigation: SB, WL

Visualization: SB

Writing–original draft: SB, WL

Writing–review & editing: SB, WL

All authors read and approved the final manuscript.

All authors have no conflicts of interest to declare.

None.

The data presented in this study are available on request from the corresponding author.


- Nezhat F. Minimally invasive surgery in gynecologic oncology: laparoscopy versus robotics. Gynecol Oncol 2008;111(2 Suppl):S29-S32.
- McAfee PC, Phillips FM, Andersson G, et al. Minimally invasive spine surgery. Spine (Phila Pa 1976) 2010;35(26 Suppl):S271-S273.
- Hernán MA. A definition of causal effect for epidemiological research. J Epidemiol Community Health 2004;58:265-271.
- Pearl J, Glymour M, Jewell NP. Causal inference in statistics: a primer. John Wiley & Sons; 2016.
- Etminan M, Collins GS, Mansournia MA. Using causal diagrams to improve the design and interpretation of medical research. Chest 2020;158(1S):S21-S28.
- Tennant PWG, Murray EJ, Arnold KF, et al. Use of directed acyclic graphs (DAGs) to identify confounders in applied health research: review and recommendations. Int J Epidemiol 2021;50:620-632.
- Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology 1999;10:37-48.
- Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669-688.
- Pearl J. Causality. Cambridge University Press; 2009.
- Suzuki E, Shinozaki T, Yamamoto E. Causal diagrams: pitfalls and tips. J Epidemiol 2020;30:153-162.
- VanderWeele TJ, Robins JM. Directed acyclic graphs, sufficient causes, and the properties of conditioning on a common effect. Am J Epidemiol 2007;166:1096-1104.
- VanderWeele TJ, Hernán MA, Robins JM. Causal directed acyclic graphs and the direction of unmeasured confounding bias. Epidemiology 2008;19:720-728.
- Greenland S, Pearl J. Article on causal diagrams. In: Boslaugh S, editor. Encyclopedia of epidemiology. Sage Publications; 2007. p. 149-156.
- Sauer BC, Brookhart MA, Roy J, VanderWeele T. A review of covariate selection for non-experimental comparative effectiveness research. Pharmacoepidemiol Drug Saf 2013;22:1139-1145.
- Greenland S, Pearce N. Statistical foundations for model-based adjustments. Annu Rev Public Health 2015;36:89-108.
- VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol 2019;34:211-219.
- Liu W, Brookhart MA, Schneeweiss S, Mi X, Setoguchi S. Implications of M bias in epidemiologic studies: a simulation study. Am J Epidemiol 2012;176:938-948.
- Rothman KJ, Greenland S, Lash TL. Modern epidemiology. Lippincott Williams & Wilkins; 2008.
- Greenland S. Quantifying biases in causal models: classical confounding vs collider-stratification bias. Epidemiology 2003;14:300-306.
- Ding P, Miratrix LW. To adjust or not to adjust? Sensitivity analysis of M-bias and butterfly-bias. J Causal Inference 2015;3:41-57.
- Lee BK, Lessler J, Stuart EA. Improving propensity score weighting using machine learning. Stat Med 2010;29:337-346.
- Westreich D, Lessler J, Funk MJ. Propensity score estimation: neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. J Clin Epidemiol 2010;63:826-833.
- Cannas M, Arpino B. A comparison of machine learning algorithms and covariate balance measures for propensity score matching and weighting. Biom J 2019;61:1049-1072.
- Textor J. Drawing and analyzing causal DAGs with DAGitty [Preprint]. 2015. https://doi.org/10.48550/arXiv.1508.04633
- Kahr HS, Christiansen OB, Høgdall C, et al. Endometrial cancer does not increase the 30-day risk of venous thromboembolism following hysterectomy compared to benign disease: a Danish National Cohort Study. Gynecol Oncol 2019;155:112-118.
- Bojesen RD, Grube C, Buzquurz F, Miedzianogora RE, Eriksen JR, Gögenur I. Effect of modifying high-risk factors and prehabilitation on the outcomes of colorectal cancer surgery: controlled before and after study. BJS Open 2022;6:zrac029.
- Digitale JC, Martin JN, Glymour MM. Tutorial on directed acyclic graphs. J Clin Epidemiol 2022;142:264-267.
- Austin AE, Desrosiers TA, Shanahan ME. Directed acyclic graphs: an under-utilized tool for child maltreatment research. Child Abuse Negl 2019;91:78-87.
- VanderWeele TJ, Shpitser I. A new criterion for confounder selection. Biometrics 2011;67:1406-1413.

pISSN 2234-778X

eISSN 2234-5248
