Current Issue

Vol 10 No 1 (2024)

Articles

• pages: 1-18

Objective: To identify the effect of iron as a preventive and therapeutic agent on depression and on hematological indices through a systematic review and meta-analysis.

Methods: International databases including Web of Science, PubMed, Cochrane, the International Clinical Trials Registry Platform, Clinicaltrials.gov, and Scopus were searched up to 27 July 2024 to identify eligible articles using the appropriate Medical Subject Headings (MeSH). The risk-of-bias tool for randomized trials (RoB 2) was used for quality assessment. Heterogeneity was determined using Cochran's Q-test and the I² index, and meta-regression was used to assess its sources. The pooled standardized mean difference (PSMD) was calculated under a random-effects model.

Results: Of 2154 studies, 14 were included in the systematic review; 6 were excluded from the analysis owing to insufficient data for calculating the PSMD, leaving 8 studies in the meta-analysis. Based on the results, iron therapy led to an improvement in depression symptoms (PSMD = -0.18; 95% CI: -0.32 to -0.03). Iron therapy also increased blood levels of iron (PSMD = 0.57; 95% CI: 0.19 to 0.95), ferritin (PSMD = 0.55; 95% CI: 0.25 to 0.85), HCT (PSMD = 0.40; 95% CI: 0.18 to 0.61), MCV (PSMD = 0.67; 95% CI: 0.18 to 1.15), and transferrin saturation (PSMD = 0.26; 95% CI: 0.02 to 0.50). Meta-regression showed that sample size, participant age, and publication year did not contribute significantly to the heterogeneity between studies.

Conclusion: The use of iron supplements in patients with depression can be considered; however, further studies involving various types of depression are needed.

Keywords: Depression; Iron; Treatment; Prevention
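For readers unfamiliar with how a pooled standardized mean difference is obtained under a random-effects model, the sketch below implements the DerSimonian-Laird estimator in Python. The per-study SMDs and variances are hypothetical placeholders rather than data from the review, and the abstract does not state which random-effects estimator the authors used.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool standardized mean differences under a random-effects model.

    y : per-study SMDs, v : their sampling variances.
    Returns the pooled SMD, its 95% CI, Cochran's Q, I^2 (%), and tau^2.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2, tau2

# Hypothetical per-study SMDs and variances, for illustration only.
smd = [-0.30, -0.10, -0.25, 0.05, -0.20, -0.15, -0.35, -0.05]
var = [0.04, 0.03, 0.05, 0.02, 0.06, 0.03, 0.05, 0.04]
print(dersimonian_laird(smd, var))
```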

• pages: 19-32


Background: This systematic review was undertaken to assess the effects of hypocaloric, high-protein diets on weight loss and cardiovascular risk factors, such as serum lipid levels, in metabolically healthy obese adults. The primary outcomes measured were changes in pre- and post-diet mean BMI, LDL-C, HDL-C, TAG, and TC levels.

Method: Four databases, Embase, MEDLINE (via PubMed), Cochrane, and Web of Science, were searched with no restrictions on language or publication period. Clinicaltrials.gov was also searched to identify unpublished or ongoing studies.

Results: Three of the four studies included in this systematic review noted a significantly greater reduction in pre- to post-diet mean BMI in the hypocaloric, high-protein diet group compared with hypocaloric, non-high-protein (control) diets. Pre- and post-diet mean LDL-C, HDL-C, TAG, and TC levels, however, did not differ significantly between the hypocaloric, high-protein and control diet groups.

Conclusion: Hypocaloric, high-protein diets had an unclear effect on blood lipid levels compared with control. Weight loss, however, was significantly greater in the hypocaloric, high-protein group than in the other hypocaloric, non-high-protein diet groups.

• pages: 33-52

The COVID-19 pandemic has significantly affected the Middle East and North Africa (MENA) region, with over 28 million cases and 800,000 deaths reported as of August 2023. Spatial analysis can help identify factors associated with the high death toll and support targeted interventions to reduce the virus's spread and improve health outcomes. This study uses GIS-based analysis and geostatistical models to analyze the COVID-19 death rate in MENA countries, identifying demographic, medical, and socioeconomic determinants. The results suggest that hospital bed allocation, the unemployment rate, and overall immunization coverage are key factors influencing the death rate, and they highlight the fragility of healthcare infrastructure in developing nations, with poor resource allocation and insufficient support for vulnerable groups. A positive correlation was found between the death rate and hospital bed allocation, the unemployment rate, and vaccination doses, underlining the importance of social isolation measures. The estimated ordinary least squares (OLS) model, which includes hospital beds, the unemployment rate, and total vaccine doses, explained 73.46% of the variation in COVID-19 death cases across the MENA region. However, spatial autocorrelation was detected in the model, requiring spatial lag regression (SLM) and spatial error regression (SEM) models. The geographically weighted regression (GWR) and multiscale geographically weighted regression (MGWR) models achieved a better fit, with lower AIC values, than the global models; the GWR model showed a clear pattern of impact in the northwestern area, and the MGWR model showed a moderate impact in the same area. Understanding the incidence of COVID-19 deaths is crucial for controlling transmission, and this work could inform future studies.
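As a rough illustration of the workflow described, here is a Python sketch that fits a global OLS model and checks its residuals for spatial autocorrelation with Moran's I, the step that motivates moving to SLM, SEM, GWR, or MGWR models. The file name and column names are hypothetical; the article does not disclose its exact data layout, and the spatial models themselves are not shown here.

```python
import geopandas as gpd
import statsmodels.api as sm
from libpysal.weights import Queen
from esda.moran import Moran

# Hypothetical country-level GeoDataFrame; column names are assumptions,
# not the variable names used in the article.
gdf = gpd.read_file("mena_covid.gpkg")
X = sm.add_constant(gdf[["hospital_beds", "unemployment_rate", "total_vaccine_doses"]])
y = gdf["death_rate"]

ols = sm.OLS(y, X).fit()
print(ols.summary())               # R-squared plays the role of the 73.46% figure

# Test the OLS residuals for spatial autocorrelation (Moran's I).
w = Queen.from_dataframe(gdf)      # contiguity-based spatial weights
w.transform = "r"                  # row-standardize
mi = Moran(ols.resid, w)
print(mi.I, mi.p_sim)              # a significant I motivates SLM/SEM or GWR/MGWR models
```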

• pages: 56-63

Introduction: Survival analysis involving cure fraction subgroups is widely used in fields such as economics, engineering, and medicine. The core of the analysis is understanding the relationship between the covariates and the survival function while accounting for censoring and long-term survival. The analysis can be performed using traditional statistical models or neural networks. Recently, neural networks have attracted attention for analyzing lifetime data because of their ability to estimate the survival function efficiently in the presence of complex covariates. To the best of our knowledge, this is the first time a parametric neural network has been introduced to analyze mixture cure fraction models.

Methods: In this paper, we introduce a novel neural network based on a mixture cure fraction Weibull loss function.

Results: An Alzheimer's disease dataset as well as a synthetic dataset were used to study the efficiency of the model. We compared the results with Weibull regression using goodness-of-fit measures on both datasets.

Conclusion: The proposed neural network has the flexibility to analyze continuous data without discretization. It also takes advantage of the properties of the Weibull distribution; for example, it can analyze data with different hazard shapes (monotonically decreasing, monotonically increasing, and constant). Compared with Weibull regression, the proposed neural network performed better.
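To make the loss concrete, the sketch below writes out the negative log-likelihood of a Weibull mixture cure model, which is the kind of objective the abstract describes. In the proposed method the parameters would be produced by the network from the covariates, whereas here they are plain arrays, and the architecture itself is not reproduced because the abstract does not specify it.

```python
import numpy as np

def mixture_cure_weibull_nll(t, delta, pi, shape, scale, eps=1e-12):
    """Negative log-likelihood of a Weibull mixture cure model.

    t     : observed times; delta : 1 if the event occurred, 0 if censored.
    pi    : cured probability; shape, scale : Weibull parameters.
    In the network setting, pi, shape and scale would come from the final
    layer as functions of the covariates; here they are plain arrays.
    """
    t, delta = np.asarray(t, float), np.asarray(delta, float)
    pi, shape, scale = (np.asarray(a, float) for a in (pi, shape, scale))
    z = (t / scale) ** shape
    surv = np.exp(-z)                                          # Weibull survival S_u(t)
    dens = (shape / scale) * (t / scale) ** (shape - 1) * surv  # Weibull density f_u(t)
    loglik = np.where(
        delta == 1,
        np.log((1 - pi) * dens + eps),           # uncured subjects who failed
        np.log(pi + (1 - pi) * surv + eps),      # censored: cured or still at risk
    )
    return -np.mean(loglik)

# Toy check with constant parameters (purely illustrative values).
rng = np.random.default_rng(0)
times = rng.weibull(1.5, 200) * 10
events = rng.integers(0, 2, 200)
print(mixture_cure_weibull_nll(times, events, pi=np.full(200, 0.3),
                               shape=np.full(200, 1.5), scale=np.full(200, 10.0)))
```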

• pages: 64-81

Introduction: Stress-strength models have received considerable attention in recent years because of their applicability in areas such as engineering, quality control, biology, genetics, and medicine. This paper investigates estimation of the stress-strength reliability parameter in two-parameter exponential distributions under progressively type-II censored samples.

Methods: The maximum likelihood and best linear unbiased estimates of the reliability parameter are obtained, and its Bayes estimates are computed under the squared error, linear-exponential (LINEX), and Stein loss functions. In addition, confidence intervals for the stress-strength reliability are obtained, including bootstrap confidence intervals, the highest posterior density (HPD) credible interval, and a confidence interval based on the generalized pivotal quantity.

Results: Using a simulation study, the point estimators and confidence intervals are evaluated and compared. A set of real data is presented for better clarification of the issue.

Conclusion: The results demonstrate that, as the sample size increases, the ERs of all the estimators decrease in almost all cases. Also, in almost all cases the Bayes estimator under the LINEX loss function has a smaller ER than the other estimators. Based on our simulation, the ELs of all intervals tend to decrease as the sample size increases. Moreover, the HPD credible intervals are shorter than the other intervals for all values of the reliability parameter.
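For intuition about the quantity being estimated, the sketch below gives a Monte Carlo approximation of the stress-strength reliability R = P(X < Y) for two-parameter exponential variables with complete (uncensored) samples. It does not reproduce the progressive type-II censoring scheme, the maximum likelihood, best linear unbiased, or Bayes estimators, or the intervals studied in the paper, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def two_param_exponential(mu, theta, size, rng):
    """Draw from a two-parameter exponential: location mu, scale theta."""
    return mu + rng.exponential(theta, size)

def stress_strength_mc(mu_x, theta_x, mu_y, theta_y, n=1_000_000):
    """Monte Carlo estimate of R = P(X < Y), X = stress, Y = strength."""
    x = two_param_exponential(mu_x, theta_x, n, rng)
    y = two_param_exponential(mu_y, theta_y, n, rng)
    return np.mean(x < y)

# Illustrative parameter values (not taken from the article).
print(stress_strength_mc(mu_x=0.0, theta_x=1.0, mu_y=0.0, theta_y=2.0))
# With both locations at zero this should be close to the closed form
# theta_y / (theta_x + theta_y) = 2 / 3.
```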


• pages: 82-97


Background: Hypertension is a serious chronic disease and an important risk factor for many health problems. This study aimed to investigate the factors associated with hypertension using a decision-tree algorithm.

Methods: This cross-sectional study was conducted in Kharameh city between 2014 and 2017 through a census. The study included 2510 hypertensive and 7840 non-hypertensive individuals. Seventy percent of the cases were randomly allocated to the training dataset for building the decision tree, while the remaining 30% were used as the testing dataset for evaluating its performance. Two models were assessed. In the first model (model I), 15 variables entered the model: age, gender, body mass index, years of education, occupational status, marital status, family history of hypertension, physical activity, total energy, number of meals, salt, oil type, drug use, alcohol use, and smoking. In the second model (model II), 16 variables were considered: age, gender, BMI, and blood factors including HCT, MCHC, PLT, FBS, BUN, CREAT, TG, CHOL, ALP, HDL, GGT, LDL, and SG. A receiver operating characteristic (ROC) curve was applied to assess the validity of the models.


Results: The accuracy, sensitivity, specificity, and area under the ROC curve (AUC) are important metrics for evaluating the performance of a decision-tree model. For model I, the accuracy, sensitivity, specificity, and AUC were 79.24%, 82.41%, 78.24%, and 0.80, respectively; for model II, the corresponding values were 79.50%, 81.03%, 79.02%, and 0.80.


Conclusion: We propose a decision-tree model to identify the risk factors associated with hypertension. This model can be useful for early screening and for improving preventive and curative health services in health promotion.
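A minimal Python sketch of the evaluation pipeline described in this abstract (70/30 split, decision tree, accuracy, sensitivity, specificity, and AUC) follows. The file name, column names, and tree settings such as max_depth are assumptions rather than details from the study, which also does not say which decision-tree implementation was used.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Hypothetical data layout: one row per participant, a binary "hypertension"
# column, and model-I style predictors; names are assumptions.
df = pd.read_csv("kharameh_cohort.csv")
X = df.drop(columns=["hypertension"])
y = df["hypertension"]

# 70/30 train/test split, mirroring the design described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0
)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
pred = tree.predict(X_test)
prob = tree.predict_proba(X_test)[:, 1]

print("accuracy   :", accuracy_score(y_test, pred))
print("sensitivity:", recall_score(y_test, pred))                # recall of the positive class
print("specificity:", recall_score(y_test, pred, pos_label=0))   # recall of the negative class
print("AUC        :", roc_auc_score(y_test, prob))
```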

• pages: 98-110

Background: One important aim of population pharmacokinetic (PK) and pharmacodynamic (PD) analysis is the identification and quantification of relationships between parameters and covariates to improve the predictive performance of population PK/PD models. Several new mathematical methods have been developed in pharmacokinetics in recent years, indicating that machine learning-based methods are an appealing tool for analyzing PK/PD data.

Methods: This simulation-based study aims to determine whether machine learning methods, including support vector regression (SVR) and random forest (RF) regression designed to predict blood serum concentration or clearance, could be an effective replacement for the Lasso covariate selection method in nonlinear mixed-effect models. Accordingly, the predictive performance of penalized (Lasso) regression, SVR, and RF regression was compared in detecting associations between clearance and model covariates. PK data were simulated from a one-compartment model with oral administration. Covariates were created by sampling from a multivariate standard normal distribution with different levels of correlation. The true covariates influenced only clearance, at different magnitudes. Lasso, RF, and SVR were compared in terms of mean absolute prediction error (MAE).

Results: The results show that SVR performed best on small datasets, even those with a high correlation between covariates. This makes SVR a promising method for covariate selection in nonlinear mixed-effect models.

Conclusion: The Lasso method yielded a higher MAE, making it less promising than RF and SVR, especially when dealing with a high correlation between covariates and a small number of individuals.
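The kind of comparison described here can be mimicked with scikit-learn as in the sketch below. The simulation settings (sample size, correlation structure, effect sizes, noise) are placeholders, and the one-compartment PK model is reduced to a log-linear clearance surrogate, so this illustrates the MAE comparison rather than reproducing the study.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Simulate correlated covariates and a clearance-like response.
n, p, rho = 50, 10, 0.7
cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
beta = np.array([0.8, 0.4, 0.2] + [0.0] * (p - 3))   # only 3 "true" covariates
clearance = np.exp(1.0 + X @ beta + rng.normal(0, 0.2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, clearance, test_size=0.3, random_state=1)

models = {
    "Lasso": LassoCV(cv=5),
    "SVR": SVR(kernel="rbf", C=10.0),
    "RF": RandomForestRegressor(n_estimators=300, random_state=1),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:5s} MAE: {mean_absolute_error(y_te, pred):.3f}")
```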

• pages: 111-123

    Background & Aim: In real-world datasets, outliers are a common occurrence that can have a significant impact on the accuracy and reliability of statistical analyses. Detecting these outliers and developing robust models to handle their presence is a crucial challenge in data analysis. For instance, natural images may have complex distributions of values due to environmental factors like noise and illumination, resulting in objects with overlapping regions and non-trivial contours that cannot be accurately described by Gaussian mixture models. In many real life applications, observed data always fall in bounded support regions. This leads to the idea of bounded support mixture models. Motivated by the aforementioned observations, we introduce a bounded multivariate cntaminated normal distribution for fitting data with non-Gaussian distributions, asymmetry, and bounded support which makes finite mixture models more robust to fitting, since rare observations are given less importance in calculations.

    Methods & Materials: A family of finite mixtures of bounded multivariate contaminated normal distributions is introduced. The model is well-suited for computer vision and pattern recognition problems due to its heavily-tailed and bounded nature, providing flexibility in modeling data in the presence of outliers. A feasible expectation-maximization algorithm is developed to compute the maximum likelihood estimates of the model parameters using a selection mechanism.

Results: The proposed methodology is validated by experiments on two real skin cancer images. The parameters are estimated with the proposed expectation-maximization algorithm, and the results show that the proposed method successfully enhances accuracy in segmenting skin lesions.

Conclusion: Reliable model-based clustering using finite mixtures of bounded multivariate contaminated normal distributions is introduced. An expectation-maximization algorithm was developed to estimate the parameters, with closed-form expressions used at the E-step. Practical tests on skin cancer images showed enhanced accuracy in delineating skin lesions.
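For readers unfamiliar with the contaminated normal family, the sketch below shows the unbounded multivariate contaminated normal density and the E-step style weight that down-weights outliers. The bounded-support truncation, the mixture over several such components, and the full EM algorithm of the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def contaminated_normal_pdf(x, mu, sigma, alpha, eta):
    """Multivariate contaminated normal density (unbounded version).

    A proportion alpha of points comes from N(mu, sigma); the remaining
    1 - alpha come from the inflated component N(mu, eta * sigma), eta > 1,
    which is what gives the model its robustness to outliers. The article's
    bounded-support variant truncates this density, which is not shown here.
    """
    good = multivariate_normal.pdf(x, mean=mu, cov=sigma)
    bad = multivariate_normal.pdf(x, mean=mu, cov=eta * sigma)
    return alpha * good + (1 - alpha) * bad

def prob_good(x, mu, sigma, alpha, eta):
    """Posterior probability that a point is 'good' (E-step style weight)."""
    good = alpha * multivariate_normal.pdf(x, mean=mu, cov=sigma)
    return good / contaminated_normal_pdf(x, mu, sigma, alpha, eta)

# Illustrative 2-D example: a point far from mu gets a low 'good' weight.
mu, sigma = np.zeros(2), np.eye(2)
pts = np.array([[0.1, -0.2], [4.0, 4.0]])
print(prob_good(pts, mu, sigma, alpha=0.9, eta=10.0))
```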

• pages: 124-131

    Objective: To estimate the effectiveness of two-dose COVID-19 vaccination in reducing hospitalization, accounting for complex confounding factors in observational studies.

Methods: Researchers applied propensity score methods to adjust for confounding variables, comparing their performance with traditional covariate adjustment methods. Multiple logistic regression and propensity score matching were employed to analyze the data, ensuring a balanced comparison between vaccinated and unvaccinated groups.

    Results: Both analytical methods demonstrated a significant reduction in the likelihood of hospitalization among vaccinated individuals. The adjusted odds ratios (OR) were 0.29 (95% CI: 0.26, 0.31) via logistic regression and 0.32 (95% CI: 0.30, 0.34) using propensity score matching.

Conclusions: The study confirms the effectiveness of two-dose COVID-19 vaccination in decreasing hospitalization. It highlights the importance of using rigorous approaches such as propensity score methods to assess real-world impacts in complex observational data settings.
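To illustrate the two analytical strategies compared in this abstract, here is a rough Python sketch of (1) covariate-adjusted logistic regression and (2) a simple 1-to-1 propensity score match followed by an outcome model. The data file, variable names, and matching details (nearest neighbour with replacement, no caliper) are assumptions; the study's actual specification is not given in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical observational dataset; "vaccinated", "hospitalized", and the
# confounder columns are assumed names, not the study's variables.
df = pd.read_csv("covid_cohort.csv")
confounders = ["age", "sex", "comorbidity_score"]

# 1) Covariate-adjusted logistic regression: OR for vaccination.
X = sm.add_constant(df[["vaccinated"] + confounders])
logit = sm.Logit(df["hospitalized"], X).fit(disp=0)
print("adjusted OR:", np.exp(logit.params["vaccinated"]))

# 2) Propensity score matching: 1-to-1 nearest neighbour on the score,
#    with replacement and no caliper, for brevity.
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["vaccinated"])
df["ps"] = ps_model.predict_proba(df[confounders])[:, 1]
treated, control = df[df["vaccinated"] == 1], df[df["vaccinated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# OR for hospitalization in the matched sample.
m = sm.Logit(matched["hospitalized"], sm.add_constant(matched["vaccinated"])).fit(disp=0)
print("matched OR :", np.exp(m.params["vaccinated"]))
```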
