Clinical variables, gene expression profiles, and histological parameters have been shown in recent years to have strong biological interactions with BC prognosis (Kourou et al., 2015). Each of these attributes can vary over time, so predictive modeling has to account for the incremental changes of a patient. Reflecting such attributes in BC prognosis therefore requires recursive data collection, which can be demanding in memory space and time consumption. For example, Gevaert et al. (2006) conducted BC prognosis on a dataset containing 25,000 gene expression values per patient each time a tumor sample is taken. Another prognosis study on BC recurrence (Park, Ahn, Kim, & Park, 2014) investigated an integrative gene network with up to 108,544 interactions per patient, where each interaction is suspected to be indicative of the outcome. Ensemble techniques have empirically proven effective for BC prognosis. However, at large data scales an ensemble model requires a correspondingly large number of weak learners, whose computation can be time- and memory-consuming. The characteristics of the data and the scale of the predictive models underscore the need for high efficiency and accuracy in BC prognosis tools. Thus, real-time, adaptable, incremental training of an ensemble model is highly desirable for practical BC prognosis.

Online learning, an efficient approach to improving training efficiency, has been introduced to BC research only in recent years. Chu, Wang, and Chen (2016) recently proposed an adaptive online learning framework for practical BC diagnosis. In their research, online learning and reinforcement learning models were combined: the online learning models generate supervised diagnosis results, and a reinforcement learning model adaptively expands the model by adding new features. Stochastic gradient descent (SGD) is used for online learning. Their proposed model enhances the accuracy of BC risk assessment from sequential data and incremental features, and indicates the potential of using online learning for BC research.
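To make the per-sample SGD update concrete, the following is a minimal sketch of an online logistic model updated one patient record at a time. It is not Chu et al.'s actual implementation; the class name, learning rate, and toy feature vectors are illustrative assumptions. The point is that each record updates the model once and can then be discarded, avoiding the recursive data accumulation described above.

```python
import numpy as np

class OnlineLogisticSGD:
    """Logistic model updated one sample at a time with SGD (illustrative)."""

    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Sigmoid of the linear score.
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def partial_fit(self, x, y):
        # Gradient of the log loss for a single (x, y) pair is (p - y) * x.
        error = self.predict_proba(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

# Stream records one at a time; past samples need not be retained.
model = OnlineLogisticSGD(n_features=3)
stream = [(np.array([0.2, 1.1, -0.3]), 1), (np.array([0.5, -0.7, 0.9]), 0)]
for x, y in stream:
    model.partial_fit(x, y)
```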
Ensemble learning is a method to integrate expert perspectives and improve prediction accuracy over the ensemble's individual base learners. Ensemble models include local integration, bagging, boosting, etc. Online versions of ensemble models have been proposed since the 1990s. Dynamic integration (Puuronen, Terziyan, & Tsymbal, 1999) is a variation of the stacked generalization algorithm, which requires a learning phase to collect information and an application phase to generate predictions. Compared to online boosting and online bagging, dynamic integration algorithms are less efficient and generally require more assumptions about base learners. Among the online learning algorithms proposed by Oza (2005), the online boosting algorithm achieves higher accuracy than online bagging and approximates offline boosting algorithms. As an effective online ensemble technique, online boosting is a paradigm that tunes a boosting model in a sequential manner. The original boosting algorithms seek to empirically minimize a loss function without numerical optimization details (Zhang & Yu, 2005). Thus, theoretical convergence had been an issue for boosting algorithms (Duffy & Helmbold, 2002; Friedman, 2001; Mason, Baxter, Bartlett, & Frean, 2000) until 2005, when Zhang and Yu (2005) proposed a gradient-descent-based boosting algorithm with an optimization guarantee under few assumptions. An online gradient boosting (OGB) algorithm was developed by Beygelzimer, Hazan, Kale, and Luo (2015) as a generalization of Zhang's algorithm to the online setting, and has been shown to converge empirically. Experimental tests have shown an average relative improvement of boosting over its base learner as high as 20% with weaker base learners. Their research also shows that OGB performs differently depending on the base learning model. Research on OGB has centered on how to align base learners toward a direction that minimizes loss. Based on the OGB theory (Beygelzimer et al., 2015), a GAOGB model is proposed in this research. However, we emphasize that adopting an OGB framework for BC research is nontrivial: it requires tackling practical challenges such as parameter tuning and maintaining both adaptiveness and effectiveness in practical dynamic data environments. Also, a benchmark comparison among state-of-the-art online learning algorithms remains to be established.
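The core idea of OGB, sequentially fitting each online base learner to the negative gradient of the loss at the running partial prediction, can be sketched as follows. This is a simplified illustration assuming squared loss and online linear base learners; the class names and step size are hypothetical, and the full algorithm of Beygelzimer et al. (2015) includes shrinkage schedules and regret guarantees omitted here.

```python
import numpy as np

class OnlineLinear:
    """Online linear regressor trained by per-sample SGD (base learner)."""

    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def partial_fit(self, x, target):
        # SGD step on squared error toward the supplied target.
        self.w += self.lr * (target - self.predict(x)) * x

class OnlineGradientBoostingSketch:
    """Simplified OGB loop for squared loss: each base learner is fit
    to the residual (negative gradient) of the running partial sum."""

    def __init__(self, n_features, n_learners=10, step=0.1):
        self.learners = [OnlineLinear(n_features) for _ in range(n_learners)]
        self.step = step

    def predict(self, x):
        return self.step * sum(h.predict(x) for h in self.learners)

    def update(self, x, y):
        partial = 0.0
        for h in self.learners:
            # For squared loss, the negative gradient at the current
            # partial prediction is simply the residual y - partial.
            h.partial_fit(x, y - partial)
            partial += self.step * h.predict(x)

# Usage: interleave prediction and update as examples arrive online.
model = OnlineGradientBoostingSketch(n_features=2, n_learners=5)
for x, y in [(np.array([1.0, 0.5]), 2.0), (np.array([0.3, -0.2]), 0.1)]:
    y_hat = model.predict(x)   # predict before the label is revealed
    model.update(x, y)         # then update with the observed label
```

Sequential residual fitting is what distinguishes this loop from online bagging, where base learners would each be updated independently on the raw label rather than on the remaining gradient.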