Journal: Neurocomputing
Fields: Cognitive Neuroscience, Computer Science Applications, Artificial Intelligence
Reviews: 308


Reviews

  • This contribution introduces a new method, Kernel Flexible Manifold Embedding (KFME), a semi-supervised graph-based approach that can map unseen examples and thereby enhance classification results. The method is an extension of the Flexible Manifold Embedding (FME) proposed by Nie et al. [1], relying on a kernel-based reformulation of the FME. The KFME objective function is proven to be jointly convex, and an algorithm is given to compute its optimal solution. KFME is more accurate than FME when the data show a highly nonlinear structure, as demonstrated in the experimental study on various datasets. Since KFME has three regularization parameters, part of the experimental study is devoted to evaluating the method's stability with respect to those parameters. The last part of the experimental study investigates the stability with respect to the graph construction methods. In all these experimental aspects, the KFME approach shows very good results.

    This contribution shows some very interesting results on both the theoretical and the applicative side. A possible improvement concerns Section 2 (Related Work): it could ease the reading to explain how the graph similarity matrix $S$ may be computed (page 5) before explaining how the Laplacian matrix is obtained. This could be a generic explanation, given before introducing the between-class and within-class graph similarity matrices $S_b$ and $S_w$ (a minimal sketch of such a generic construction is given at the end of this review).

    In Section 2.1 (Semi-supervised Discriminant Analysis), it is confusing to use the same notation $S_w$ and $S_b$, as these matrices are not computed in the same way as the ones defined just before. Also, it is difficult to understand Equation (1) without referring to the original publication of Cai et al. [2].

    In the results discussion (Section 4.3), the explanation of the better results on test data for some of the datasets is unclear. What is meant by the "complexity of the use datasets"?

    Otherwise, there are only a few minor typos that could easily be corrected. In the three equations at the bottom of page 7, $S_w$, $S_b$ and $S$ are displayed in a normal font; should they not appear in bold face? On page 15, the second sentence of Section 3.4.1 reads "This latter ..."; I think it should be "The latter ...". On page 17, the URL of the Yale dataset is not correct: the underscore character does not appear. On page 23, the term "outperformance" is unusual, and perhaps the sentence could be reformulated. In the bibliography, several authors have lost letters of their names, probably due to a problem with diacritical signs: see for example reference 14, with Rätsch, Schölkopf and Müller. The same goes for references 19 and 29.

    [1] F. Nie, D. Xu, I. W.-H. Tsang, and C. Zhang. Flexible manifold embedding: A framework for semi-supervised and unsupervised dimension reduction. IEEE Transactions on Image Processing, 19(7):1921-1932, 2010.

    [2] D. Cai, X. He, and J. Han. Semi-supervised discriminant analysis. In Proc. ICCV, pp. 1-7, 2007.
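
    As an illustration of the generic construction suggested above for Section 2, here is a minimal sketch in Python of how a similarity matrix $S$ and the corresponding Laplacian could be presented before the class-specific variants. The k-nearest-neighbor graph, heat-kernel weights, neighborhood size and bandwidth are my own illustrative choices, not those of the paper:

        import numpy as np

        def graph_laplacian(X, k=5, sigma=1.0):
            """X: (n, d) data matrix; returns the similarity matrix S and L = D - S."""
            n = X.shape[0]
            d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
            S = np.zeros((n, n))
            for i in range(n):
                idx = np.argsort(d2[i])[1:k + 1]                        # k nearest neighbors of x_i
                S[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))      # heat-kernel weights
            S = np.maximum(S, S.T)                                      # symmetrize the graph
            L = np.diag(S.sum(axis=1)) - S                              # unnormalized graph Laplacian
            return S, L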

  • This revision fairly answers the reviewers' comments from the previous round. The addition of a "related works" section enhances the clarity of the paper: this revision is thus clearer, and the positioning of the proposed approach is better defined. In the experimental results, the comparison with other approaches seems to cover most of the major modern algorithms.

    Thanks to the new explanations, it becomes more apparent that the selection of a "safe neighborhood" is a critical point. The paper explains that a small neighbor set of $k_0$ elements is hypothesized to contain only safe neighbors. Thus, $k_0$ appears to be an important hyperparameter of the algorithm (a short sketch of this neighbor selection is given at the end of this review). It could be interesting to make the value of $k_0$ explicit in the experimental results section, or to make it more apparent in the algorithm box (by adding a list of input parameters). But since Remark 1 clearly indicates that it is set to 2 or 3, this is not mandatory.

    I found some typos which could be corrected:

    • page 7: "to obtains" -> "to obtain"

    • page 8: "an adaptative neighborhood selection strategy are proposed" -> "... is proposed"; "the proposed RLLPE algorithm do not" -> "... does not"; "but concern the exploiting" -> "but concerns the exploitation" or "but exploits"; "we do not touch" -> "we do not deal with"

    • page 9: "fail miserably": "miserably" seems a bit strong in my opinion; it could be replaced with "completely fail" or any equivalent wording.
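
    As an aside, here is a minimal sketch in Python of the kind of neighbor selection I have in mind when reading Remark 1; the value $k_0 = 2$ is only illustrative, and the paper's actual safety criterion is not reproduced here:

        import numpy as np

        def safe_neighbors(X, k0=2):
            """Return, for each row of X, the indices of its k0 nearest neighbors."""
            d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            return np.argsort(d2, axis=1)[:, 1:k0 + 1]   # column 0 is the point itself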

  • The authors have kindly taken into account the reviewers' suggestions. These modifications improved the clarity of the paper, and I have no further comment to make on this revised version.

  • This contribution proposes to improve the optimization problem solved during locally linear embedding (LLE) by adding an $\ell_2$-norm regularization term to the minimization problem. A comprehensive analysis of the underlying hypotheses and some representative examples of the benefits of this algorithm are provided. Numerical experiments validate the approach on synthetic and real datasets through comparison with state-of-the-art algorithms.

    The real local-linearity preserving embedding algorithm enhances the quality of the characterization of the manifold while increasing the computational cost. The paper is well structured and well written. Here are some suggestions to improve the quality of the paper (a sketch of the standard regularized weight computation is given at the end of this review):

    • A small paragraph explaining the proposed improvements with respect to some of the models proposed by the author could be helpful, e.g. the models described in the NIPS '05 and '06 papers.

    • On page 3, the sentence "Generally, LLE works well if the neighbor sets are well determined" is not clear. What is a well-determined subset?

    • The equation before Equation (10) is not fully described, since $\theta_j$ and $\theta_j^{(i)}$ are not detailed.

    As a side remark, in other publications the LTSA algorithm seems to obtain better results on the noisy Swiss roll with a hole. Why is the embedding found by LTSA in Figure 3 so different from the generating parameters?

    Please also take note of these typos:

    • The section and subsection titles are not capitalized.

    • On page 4: "Figure1".

    • On page 6: "...] ,i.e., ...".

    • On page 9: "Nearest Neighbor(NN)".

    • The NEML acronym is neither defined nor referenced.
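
    For reference, here is a minimal sketch in Python of the standard $\ell_2$-regularized computation of the LLE reconstruction weights, i.e. a Tikhonov-regularized solve of the local Gram system; it is only meant to fix ideas and does not claim to reproduce the exact formulation proposed in the paper:

        import numpy as np

        def lle_weights(X, neighbors, reg=1e-3):
            """X: (n, d) data; neighbors: (n, k) neighbor indices; returns (n, k) weights."""
            n, k = neighbors.shape
            W = np.zeros((n, k))
            for i in range(n):
                Z = X[neighbors[i]] - X[i]                # neighbors centered on x_i
                C = Z @ Z.T                               # local Gram matrix (k x k)
                C = C + reg * np.trace(C) * np.eye(k)     # l2 (Tikhonov) regularization
                w = np.linalg.solve(C, np.ones(k))        # solve C w = 1
                W[i] = w / w.sum()                        # enforce the sum-to-one constraint
            return W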

  • This contribution investigates the relation between cortical stimulations and the evoked responses they induce. The authors propose to infer the stimulation parameters required to induce a given evoked response by applying support vector regression (a minimal sketch of such a regression is given at the end of this review). A thorough analysis of the recordings acquired from two patients is conducted, and the results are convincing.

    In this extended version, the data analysis section is more complete and more precise than in the conference paper, thus dispelling the concerns regarding the dataset preprocessing and the evaluation methodology.

    A possible suggestion for further work is to formalize the relation between the neural connectivity and the perturbation of the evoked responses. Is it possible to obtain a spatial map summarizing the influence of each stimulation site?

    Please note that the clarity of the first sentence of the last paragraph on page 9 could be improved: "A limitation of the study is that the polarity of the stimulation pulse (...)".
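
    To make the regression setting concrete, here is a minimal sketch in Python (using scikit-learn) of regressing a stimulation parameter from evoked-response features; the array names, shapes and the random placeholder data are my own assumptions, not the authors' pipeline:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        responses = rng.normal(size=(100, 20))        # placeholder evoked-response features
        intensity = rng.uniform(1.0, 5.0, size=100)   # placeholder stimulation intensity

        model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
        scores = cross_val_score(model, responses, intensity, cv=5, scoring="r2")
        print("cross-validated R^2: %.2f" % scores.mean())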
