<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="https://hjcho-lab.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://hjcho-lab.github.io/" rel="alternate" type="text/html" /><updated>2022-03-21T03:51:35+00:00</updated><id>https://hjcho-lab.github.io/feed.xml</id><title type="html">Hailing frequencies open, Captain</title><author><name>Hyunjin Cho</name></author><entry><title type="html">전기전자공학 실험지시서 Lecture 05</title><link href="https://hjcho-lab.github.io/EE_fundamental_exp05/" rel="alternate" type="text/html" title="전기전자공학 실험지시서 Lecture 05" /><published>2021-11-11T00:00:00+00:00</published><updated>2021-11-11T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/EE_fundamental_exp05</id><content type="html" xml:base="https://hjcho-lab.github.io/EE_fundamental_exp05/">&lt;center&gt;  &lt;font size=&quot;6&quot;&gt; &lt;strong&gt;실 험 지 시 서&lt;/strong&gt; &lt;/font&gt; &lt;/center&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Topic: performing diode-related experiments&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;References&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;해사 lab textbook&lt;/code&gt;, Experiments 9 (basic theory), 10 (half-wave and full-wave rectifier circuits), and 11 (clipping and clamping circuit design)&lt;/li&gt;
  &lt;li&gt;ㅇㅇㅇ&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;h1 id=&quot;이론-설명-주요내용&quot;&gt;Key points of the theory explanation&lt;/h1&gt;
&lt;ul&gt;
  &lt;li&gt;Diode basics
    &lt;ul&gt;
      &lt;li&gt;What role the device plays&lt;/li&gt;
      &lt;li&gt;What models can represent it; lab textbook, p. 117&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Let’s keep updating through the circuit at https://tinyurl.com/ydt2b2e3!&lt;/li&gt;
  &lt;li&gt;We also need to check which diode model is implemented in the simulator. How do we verify it, and how should the circuit be constructed?
    &lt;ul&gt;
      &lt;li&gt;On setting detailed parameters after building the basic circuit from the lab textbook: diodes come with many options, and the number denotes the model number. Detailed specs for each model number can be found at https://www.alldatasheet.com/. We will use the most basic model.&lt;/li&gt;
      &lt;li&gt;AC source with only a diode: observe the diode’s voltage and current waveforms; the current waveform rises slightly later.&lt;/li&gt;
      &lt;li&gt;AC source, diode, and resistor in series: …&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Half-wave and full-wave rectifier circuits&lt;/strong&gt;: once the diode’s characteristics in the simulator have been verified, explain half-wave and full-wave rectification, and then, in connection with the LPF discussion, explain how a DC voltage is extracted.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Clipping and clamping circuits&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;Example: electric-guitar distortion
        &lt;ul&gt;
          &lt;li&gt;https://m.blog.naver.com/yogoho210/221082881085 (includes a demo YouTube video)&lt;/li&gt;
          &lt;li&gt;“일렉기타같은 사운드를 아주 간단하게 만드는 방법이 있다고?” (Is there a really simple way to make an electric-guitar-like sound?) by 주관적 미디시점, 0:00–0:53
            &lt;ul&gt;
              &lt;li&gt;Shows the waveforms in a graph&lt;/li&gt;
            &lt;/ul&gt;
          &lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;One AC/DC song: &lt;a href=&quot;https://www.youtube.com/watch?v=AW3-SuBZRt0&quot;&gt;Iron Man ‘Back in Black’ by Old Player&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
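Before moving to the simulator, the half-wave clipping behaviour above can be sketched numerically. This is a rough illustration with a constant-voltage-drop diode model; the 5 V / 60 Hz source and the 0.7 V drop are assumed values, not taken from the lab textbook.

```python
import numpy as np

# Assumed values for illustration: 5 V / 60 Hz source, 0.7 V diode drop.
V_D = 0.7
t = np.linspace(0.0, 0.1, 1000)              # 0.1 s of time samples
v_in = 5.0 * np.sin(2 * np.pi * 60.0 * t)    # AC source voltage

# Constant-drop model: the diode conducts only while v_in > V_D,
# so the load sees v_in - V_D; otherwise the output stays at 0.
v_out = np.where(v_in > V_D, v_in - V_D, 0.0)
```

Plotting `v_out` against `t` reproduces the half-wave rectified waveform discussed above: the negative half-cycles are clipped to zero, and the peaks sit one diode drop below the source peaks.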

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;h1 id=&quot;회로-구성-falstad&quot;&gt;Circuit construction @falstad&lt;/h1&gt;
&lt;ul&gt;
  &lt;li&gt;How to check the simulator’s diode model&lt;/li&gt;
  &lt;li&gt;Build a circuit that clips both the top and the bottom; Fig. 11-6&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Hyunjin Cho</name></author><category term="text" /><category term="experiment" /><category term="NAEE" /><summary type="html">실 험 지 시 서</summary></entry><entry><title type="html">SBL code analysis</title><link href="https://hjcho-lab.github.io/SBLcode_example/" rel="alternate" type="text/html" title="SBL code analysis" /><published>2021-11-01T00:00:00+00:00</published><updated>2021-11-01T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/SBLcode_example</id><content type="html" xml:base="https://hjcho-lab.github.io/SBLcode_example/">&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;BASIS	= exp(-distSquared(X,C)/(basisWidth^2));&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;distSquared&lt;/code&gt; function computes the pairwise squared distances $(x-x_m)^2$&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-matlab&quot;&gt;D2 = (sum((X.^2), 2) * ones(1,ny)) + (ones(nx, 1) * sum((Y.^2),2)') - 2*X*Y';
&lt;/code&gt;&lt;/pre&gt;
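The MATLAB one-liner builds the full matrix of pairwise squared distances from the identity $\|x-y\|^2 = \|x\|^2 + \|y\|^2 - 2x^Ty$. A NumPy sketch of the same computation; the function name and sample values here are mine, not from the original code:

```python
import numpy as np

def dist_squared(X, Y):
    # ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 x_i . y_j, vectorised exactly
    # like the MATLAB line:
    # D2 = sum(X.^2,2)*ones(1,ny) + ones(nx,1)*sum(Y.^2,2)' - 2*X*Y'
    return (X**2).sum(axis=1)[:, None] + (Y**2).sum(axis=1)[None, :] - 2.0 * X @ Y.T

X = np.array([[0.0, 0.0], [1.0, 2.0]])   # two data points
C = np.array([[1.0, 0.0]])               # one basis centre
D2 = dist_squared(X, C)                  # pairwise squared distances, shape (2, 1)
basis = np.exp(-D2 / 1.0**2)             # BASIS = exp(-distSquared(X,C)/basisWidth^2)
```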

&lt;p&gt;Consider models that are a linearly-weighted sum of basis functions
\(\phi_m(x)=\exp \left( -\frac{ ||x-x_m||^2 }{r^2} \right)\)&lt;/p&gt;</content><author><name>Hyunjin Cho</name></author><category term="text" /><summary type="html">BASIS = exp(-distSquared(X,C)/(basisWidth^2));</summary></entry><entry><title type="html">전기전자공학 실험지시서 Lecture 00 - math equation</title><link href="https://hjcho-lab.github.io/EE_fundamental_exp00/" rel="alternate" type="text/html" title="전기전자공학 실험지시서 Lecture 00 - math equation" /><published>2021-10-28T00:00:00+00:00</published><updated>2021-10-28T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/EE_fundamental_exp00</id><content type="html" xml:base="https://hjcho-lab.github.io/EE_fundamental_exp00/">&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;#youtube-lecture-playlist&quot;&gt;Youtube lecture playlist&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#section-31-p97&quot;&gt;section 3.1, &lt;sub&gt;p97&lt;/sub&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;center&gt;  &lt;font size=&quot;6&quot;&gt; &lt;strong&gt;Experiment Instructions&lt;/strong&gt; &lt;/font&gt; &lt;/center&gt;

&lt;h1 id=&quot;youtube-lecture-playlist&quot;&gt;Youtube lecture playlist&lt;/h1&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;h1 id=&quot;section-31-p97&quot;&gt;section 3.1, &lt;sub&gt;p97&lt;/sub&gt;&lt;/h1&gt;

&lt;p&gt;This is a test inline equation, \(x=Fa\).&lt;/p&gt;

&lt;p&gt;A compressible signal $\boldsymbol{x} \in \mathbb{R}^n$ may be written as a sparse vector $\boldsymbol{s} \in \mathbb{R}^n$ in a transform basis $\boldsymbol{\Psi} \in \mathbb{R}^{n \times n}$:&lt;/p&gt;

\[\boldsymbol{x} = \boldsymbol{\Psi} \boldsymbol{s} \tag{3.1}\]

&lt;p&gt;Specifically, the vector $\boldsymbol{s}$ is called $K$-sparse in $\boldsymbol{\Psi}$ if there are exactly $K$ nonzero elements.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;As described in the book, the Fourier and wavelet transforms may be understood as concrete examples of $\boldsymbol{\Psi}$.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;With knowledge of the sparse vector $\boldsymbol{s}$ it is possible to reconstruct the signal $\boldsymbol{x}$ from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;eq.(3.1)&lt;/code&gt;. &lt;mark&gt;Thus, the goal of compressed sensing is to find the sparsest vector $\boldsymbol{s}$ that is consistent with the measurements $\boldsymbol{y}$.&lt;/mark&gt;&lt;/p&gt;

\[\begin{aligned}
\boldsymbol{y} &amp;amp; = \boldsymbol{C} \boldsymbol{x} \tag{3.3} \\
&amp;amp; = \boldsymbol{C} \boldsymbol{\Psi} \boldsymbol{s} \\
&amp;amp; = \boldsymbol{\Theta} \boldsymbol{s}
\end{aligned}\]

\[\lim_{x\to 0}{\frac{e^x-1}{2x}}
\overset{\left[\frac{0}{0}\right]}{\underset{\mathrm{H}}{=}}
\lim_{x\to 0}{\frac{e^x}{2}}={\frac{1}{2}}\]
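As a quick numerical sanity check of the limit above (plain Python, independent of the MathJax rendering):

```python
import math

# lim_{x -> 0} (e^x - 1)/(2x) = 1/2; the Taylor expansion gives 1/2 + x/4 + O(x^2),
# so the error should shrink roughly linearly in x.
for x in [1e-2, 1e-4, 1e-6]:
    val = (math.exp(x) - 1.0) / (2.0 * x)
    assert abs(val - 0.5) < x
```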

&lt;p&gt;&lt;strong&gt;Recovering an audio signal from sparse measurements&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;language-matlab highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;% Generate signal, DCT of signal&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4096&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;             &lt;span class=&quot;c1&quot;&gt;% points in high resolution signal&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;t&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;linspace&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;cos&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;97&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;pi&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;t&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;cos&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;777&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;pi&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;t&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;% Randomly sample signal&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;p&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;128&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;              &lt;span class=&quot;c1&quot;&gt;% num. random samples, p=n/32&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;perm&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;round&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;rand&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;p&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;n&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;perm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;          &lt;span class=&quot;c1&quot;&gt;% compressed measurement&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;</content><author><name>Hyunjin Cho</name></author><category term="text" /><summary type="html">Youtube lecture playlist section 3.1, p97</summary></entry><entry><title type="html">전기전자공학 실험지시서 Lecture 00 - test</title><link href="https://hjcho-lab.github.io/post01_refsite/" rel="alternate" type="text/html" title="전기전자공학 실험지시서 Lecture 00 - test" /><published>2021-10-28T00:00:00+00:00</published><updated>2021-10-28T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/post01_refsite</id><content type="html" xml:base="https://hjcho-lab.github.io/post01_refsite/">&lt;h1 id=&quot;제목&quot;&gt;제목&lt;/h1&gt;

\[F=ma\]</content><author><name>Hyunjin Cho</name></author><category term="text" /><summary type="html">제목</summary></entry><entry><title type="html">CS outline</title><link href="https://hjcho-lab.github.io/CS_SBL/" rel="alternate" type="text/html" title="CS outline" /><published>2021-10-28T00:00:00+00:00</published><updated>2021-10-28T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/CS_SBL</id><content type="html" xml:base="https://hjcho-lab.github.io/CS_SBL/">&lt;h1 id=&quot;reference-lists&quot;&gt;Reference lists&lt;/h1&gt;
&lt;ul&gt;
  &lt;li&gt;textbook &amp;amp; youtube lecture lists by Steve Brunton
    &lt;ul&gt;
      &lt;li&gt;textbook: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0037]_T03_R16&lt;/code&gt; - Brunton, Steven L., and J. Nathan Kutz. Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press, 2019.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/playlist?list=PLMrJAkhIeNNRHP5UA-gIimsXLQyHXxRty&quot;&gt;Youtube lecture series by Steve Brunton(Total 23)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;textbook: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R08&lt;/code&gt; - Eldar, Yonina C., and Gitta Kutyniok, eds. Compressed sensing: theory and applications. Cambridge university press, 2012.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;Compressive Sensing with matlab code
    &lt;ul&gt;
      &lt;li&gt;Michigan center for CS &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R02&lt;/code&gt;&lt;/li&gt;
      &lt;li&gt;matlab &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R01&lt;/code&gt; mathworks Cleve’s Corner&lt;/li&gt;
      &lt;li&gt;Colorado - Beginners Code for compressive sensing 09Y &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R07&lt;/code&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;SBL: Mike Tipping’s &lt;a href=&quot;/assets/mike_tipping_SBL.pdf&quot;&gt;paper&lt;/a&gt; and code
    &lt;ul&gt;
      &lt;li&gt;The code is really hard to follow&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;SBL for DOA: Peter’s paper and code
    &lt;ul&gt;
      &lt;li&gt;The code cross-compares the performance of Peter’s SBL and CVX&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Paper and code by 신명인 &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T06_C02&lt;/code&gt;, plus references
    &lt;ul&gt;
      &lt;li&gt;ㅇㅇㅇ&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;cs&quot;&gt;CS&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;model&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Data occurring in nature admit a sparse representation in a suitable coordinate system (i.e., $\boldsymbol{\Psi}$, such as the DCT, FT, etc.). &lt;sub&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0037]_T03_R16&lt;/code&gt;, p96&lt;/sub&gt;
\(\boldsymbol{x} = \boldsymbol{\Psi} \boldsymbol{s}\)&lt;/li&gt;
  &lt;li&gt;$x$ is the original signal; as noted above, its inherent sparsity lets it be expressed as a K-sparse vector in a transform basis.
Meanwhile, the original signal ($\boldsymbol{x} \in \mathbb{R}^{n}$) is mapped to the measurement ($\boldsymbol{y} \in \mathbb{R}^{p}$) by a measurement (or sampling) matrix ($\boldsymbol{C} \in \mathbb{R}^{p \times n}$).
\(\begin{aligned}
\boldsymbol{y} &amp;amp; = \boldsymbol{C} \boldsymbol{x} \\
&amp;amp; = \boldsymbol{C} \boldsymbol{\Psi} \boldsymbol{s} \\
&amp;amp; = \boldsymbol{\Theta} \boldsymbol{s}
\end{aligned}\)
    &lt;ul&gt;
      &lt;li&gt;The goal of sparse representation is to find the sparsest vector $\boldsymbol{s}$ from the measurement $\boldsymbol{y}$.&lt;/li&gt;
      &lt;li&gt;Equivalently, to reconstruct the original signal $\boldsymbol{x}$ from the measurement $\boldsymbol{y}$.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
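The model above can be sanity-checked end-to-end in a short sketch. The random orthonormal $\boldsymbol{\Psi}$ and the sizes $n=64$, $p=16$, $K=3$ are arbitrary stand-ins of mine; in practice $\boldsymbol{\Psi}$ would be a DCT or FT.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 64, 16, 3                      # x in R^n, y in R^p, s is K-sparse (illustrative)

Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # stand-in orthonormal basis
s = np.zeros(n)
s[rng.choice(n, size=K, replace=False)] = rng.standard_normal(K)   # K-sparse coefficients

x = Psi @ s                              # x = Psi s
C = rng.standard_normal((p, n))          # measurement / sampling matrix
y = C @ x                                # y = C x
Theta = C @ Psi                          # so that y = Theta s
```

Recovering `s` from `y` and `Theta` is then the sparse-recovery problem the following sections solve with L1-magic, CVX, and SBL.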

&lt;h1 id=&quot;cs-in-action&quot;&gt;CS in action&lt;/h1&gt;

&lt;h2 id=&quot;matlab-l1-magic-example&quot;&gt;matlab &lt;strong&gt;L1-magic&lt;/strong&gt; example&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Michigan center for CS &lt;sub&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R02&lt;/code&gt;&lt;/sub&gt;
    &lt;ul&gt;
      &lt;li&gt;The matrix $\boldsymbol{B} \in \mathbb{R}^{481 \times 481}$ is &lt;strong&gt;a matrix that performs the DFT&lt;/strong&gt;&lt;/li&gt;
      &lt;li&gt;The matrix $\boldsymbol{A} \in \mathbb{R}^{90 \times 481}$ is a &lt;strong&gt;measurement matrix that randomly selects some of the rows of $\boldsymbol{B}$, i.e., it performs the DFT on randomly chosen data points&lt;/strong&gt;, i.e., $\boldsymbol{y} = \boldsymbol{A} \boldsymbol{x}^{T}$&lt;/li&gt;
      &lt;li&gt;The point of the toy example is to recover the source signal $ \boldsymbol{x} $ from the measurement $ \boldsymbol{y} $ and the measurement matrix $ \boldsymbol{A} $&lt;/li&gt;
      &lt;li&gt;To follow the code’s logic, you need to know that &lt;a href=&quot;https://dsp.stackexchange.com/questions/54864/performing-dft-twice-on-an-image-why-am-i-getting-an-inverted-image&quot;&gt;applying the DFT twice returns the original signal (reversed)&lt;/a&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;matlab simple code&lt;sub&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R01&lt;/code&gt;&lt;/sub&gt; mathworks Cleve’s Corner
    &lt;ul&gt;
      &lt;li&gt;Essentially the same as the Michigan center for CS example, but the L1-magic result is compared against the result from pinv&lt;/li&gt;
      &lt;li&gt;L1-magic minimises the 1-norm; pinv the 2-norm&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;cvx-convex-optimization-example-0832_t01_r07&quot;&gt;CVX, &lt;strong&gt;convex optimization&lt;/strong&gt; example &lt;sub&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T01_R07&lt;/code&gt;&lt;/sub&gt;&lt;/h2&gt;

&lt;h2 id=&quot;cs-with-bayesian-sbl-sparse-bayesian-learning-0832_t06_c02&quot;&gt;CS with Bayesian: SBL, Sparse bayesian learning &lt;sub&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[0832]_T06_C02&lt;/code&gt;&lt;/sub&gt;&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;SBL was proposed by Mike Tipping: &lt;a href=&quot;http://www.miketipping.com/&quot;&gt;Web&lt;/a&gt; and Paper (2001)&lt;/li&gt;
  &lt;li&gt;Applying SBL to underwater acoustics was initiated by &lt;a href=&quot;http://noiselab.ucsd.edu/index.html&quot;&gt;Peter at the UCSD Noise Lab&lt;/a&gt;, mainly for DOA estimation and localization&lt;/li&gt;
  &lt;li&gt;신명인 applied SBL to frequency estimation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When updating the SBL hyperparameters, Mike uses EM algorithms, whereas Peter uses a statistical approach&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;center&gt; * * * &lt;/center&gt;
&lt;p&gt; &lt;/p&gt;

&lt;h1 id=&quot;regression-comparison-sbl-algorithms-in-depth&quot;&gt;regression comparison: SBL algorithms in depth&lt;/h1&gt;
&lt;p&gt;This SBL tutorial follows Mike Tipping’s lecture notes &lt;a href=&quot;/assets/Lnote1_MLSS_Tipping1.pdf&quot;&gt;1&lt;/a&gt;, 2, 3, 4. The notes 1) compare the classic and probabilistic approaches to parameter estimation (or regression, i.e., model prediction) and 2) derive the Sparse Bayesian Learning algorithm by adding a condition (i.e., sparsity) to the probabilistic approach.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Wka3helmjRg&amp;amp;t=2225s&quot;&gt;youtube SBL useful outline introduction&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;model-for-classic-approach&quot;&gt;model for CLASSIC approach&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;given model&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;data set comprising $N=15$ samples generated from the function $y=\sin(x)$ with Gaussian noise of $\sigma^2=0.2$.
    &lt;ul&gt;
      &lt;li&gt;model this data with some parameterised function $y(x;\boldsymbol{w})$&lt;/li&gt;
      &lt;li&gt;linearly-weighted sum of $M$ fixed basis functions $\phi _m(x)$ (&lt;a href=&quot;https://keepmind.net/%EA%B8%B0%EA%B3%84%ED%95%99%EC%8A%B5-radial-basis-function/&quot;&gt;RBF, radial basis function&lt;/a&gt;)
\(y(x;\boldsymbol{w}) = \sum_{m=1}^{M} w_{m} \phi _{m}(x)\)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;: model prediction, i.e., predicting the model parameters&lt;/p&gt;

&lt;h2 id=&quot;classic-approach-1-least-squares-approach&quot;&gt;CLASSIC approach #1: Least-squares approach&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Criterion&lt;/strong&gt;: minimising the error measure
\(E_D(\boldsymbol{w}) = \frac{1}{2} \sum_{n=1}^{N}  \left[ t_n - \sum_{m=1}^{M} w_{m} \phi _{m}(x_n) \right]^2\)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;solution&lt;/strong&gt;
\(\boldsymbol{w}_{LS} = \left( \Phi^T \Phi  \right)^{-1} \Phi^T \boldsymbol{t}\)&lt;/p&gt;
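A quick numerical sketch of this solution on the toy data set ($N=15$ noisy samples of $\sin x$); the RBF width $r=1$, the number of basis functions $M=5$, and the centre placement are assumptions of mine, not values fixed by the lecture notes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, r = 15, 5, 1.0
x = np.linspace(0.0, 2.0 * np.pi, N)
t = np.sin(x) + rng.normal(0.0, np.sqrt(0.2), N)   # sigma^2 = 0.2 noise

centers = np.linspace(0.0, 2.0 * np.pi, M)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / r**2)  # N x M design matrix

# w_LS = (Phi^T Phi)^{-1} Phi^T t, solved via the normal equations
# without forming the explicit inverse
w_ls = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)
```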

&lt;p&gt;disadvantage: over-fitting&lt;/p&gt;

&lt;h2 id=&quot;classic-approach-2-pls-penalized-least-squares&quot;&gt;CLASSIC approach #2: PLS, penalized least-squares&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;idea or motivation&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;we typically prefer smoother function, which typically have smaller weights $\boldsymbol{w}$&lt;/li&gt;
  &lt;li&gt;complexity control - Regularisation (over-fitting problem)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Criterion&lt;/strong&gt;: minimising the error with a weight penalty.
Add a weight penalty term to the error function so that
\(E(\boldsymbol{w}) = E_D(\boldsymbol{w}) + \color{red} \lambda \color{a} E_W(\boldsymbol{w})\)&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;where standard choice of weight penalty is the squared-weight penalty, $E_W(\boldsymbol{w}) = \frac{1}{2} \sum_{m=1}^{M} w_{m}^{2}$&lt;/li&gt;
  &lt;li&gt;&lt;span style=&quot;color:red&quot;&gt; &lt;em&gt;the hyperparameter&lt;/em&gt; &lt;/span&gt; $ \color{red} \lambda $ balances the trade-off between $E_D(\boldsymbol{w})$ and $E_W(\boldsymbol{w})$&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;solution&lt;/strong&gt;
\(\boldsymbol{w}_{PLS} = \left( \Phi^T \Phi + \lambda I \right)^{-1} \Phi^T \boldsymbol{t}\)&lt;/p&gt;
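On the same assumed toy setup, the PLS solution only changes the matrix being inverted; $\lambda = 0.1$ is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, r, lam = 15, 5, 1.0, 0.1
x = np.linspace(0.0, 2.0 * np.pi, N)
t = np.sin(x) + rng.normal(0.0, np.sqrt(0.2), N)
centers = np.linspace(0.0, 2.0 * np.pi, M)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / r**2)

# w_PLS = (Phi^T Phi + lambda I)^{-1} Phi^T t
w_pls = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ t)
w_ls = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)   # plain least squares, for comparison
```

The penalty shrinks the weight vector relative to plain least squares, which is exactly the complexity control motivated above.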

&lt;p&gt; &lt;/p&gt;

&lt;h2 id=&quot;model-for-probalistic-approach&quot;&gt;model for probabilistic approach&lt;/h2&gt;
&lt;blockquote&gt;
  &lt;p&gt;A probabilistic approach does not in itself mean a Bayesian approach; also refer to Kay’s estimation text.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;\(t_n = y(x_n;\boldsymbol{w}) + \epsilon _n\)&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;where $\epsilon _n$ is noise with Gaussian distribution, $N(0, \sigma ^2)$:&lt;/li&gt;
  &lt;li&gt;$p(t_n \mid \boldsymbol{w}, \sigma^2) \sim N(y(x_n; \boldsymbol{w}), \sigma ^2)$&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the likelihood of the data set is
\(\color{green} p(\boldsymbol{t}|\boldsymbol{w}, \sigma ^2) \color{a} = \prod_{n=1}^{N} p(t_n | \boldsymbol{w}, \sigma^2)\)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;: model prediction, i.e., predicting the model parameters&lt;/p&gt;

&lt;h2 id=&quot;probabilistic-approach-1-maximum-likelihood&quot;&gt;Probabilistic approach #1: Maximum Likelihood&lt;/h2&gt;
&lt;p&gt;Estimate $\boldsymbol{w}$ as the value that maximises $\color{green} p(\boldsymbol{t}|\boldsymbol{w}, \sigma ^2)$;
this is identical to the least-squares solution.&lt;/p&gt;

&lt;h2 id=&quot;probabilistic-approach-2-map-maximum-a-posteriori&quot;&gt;Probabilistic approach #2: MAP, maximum a posteriori&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;idea or motivation&lt;/strong&gt;
similar to the PLS regularisation weight penalty above, we now control the complexity of the model via a prior distribution which expresses our ‘degree of belief’ over values that $\boldsymbol{w}$ might take:
\(p(\boldsymbol{w}|\alpha) = \prod_{m=1}^{M} \sqrt{\frac{\alpha}{2\pi}} \exp \left( -\frac{1}{2} \alpha w_{m}^{2} \right)\)&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;zero-mean Gaussian prior, independent for each weight $w_m$&lt;/li&gt;
  &lt;li&gt;common &lt;span style=&quot;color:green&quot;&gt; &lt;em&gt;inverse variance hyperparameter&lt;/em&gt; &lt;/span&gt; $\alpha$&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bayes rule&lt;/strong&gt;
with more familiar notation
\(\begin{aligned}
p(\theta | x) &amp;amp; = \frac{ p(x | \theta) p(\theta) }{ p(x) } \\
p(\boldsymbol{w} | \boldsymbol{t}, \alpha, \sigma ^2) &amp;amp; = \frac{ p(\boldsymbol{t} | \boldsymbol{w}, \sigma ^2) p(\boldsymbol{w} | \alpha) }{ p(\boldsymbol{t} | \alpha, \sigma ^2) }
\end{aligned}\)&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;so, the posterior is Gaussian: $ p(\boldsymbol{w} | \boldsymbol{t}, \alpha, \sigma ^2) = N(\mu, \Sigma) $ with
$\mu = \sigma^{-2} \Sigma \Phi^T \boldsymbol{t}$,
$\Sigma = \left( \alpha I + \sigma^{-2} \Phi^T \Phi \right)^{-1}$&lt;/li&gt;
&lt;/ul&gt;
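For this Gaussian posterior, the standard closed forms are $\Sigma = (\alpha I + \sigma^{-2} \Phi^T \Phi)^{-1}$ and $\mu = \sigma^{-2} \Sigma \Phi^T \boldsymbol{t}$. A sketch on the same assumed toy setup; $\alpha = 1$ is an arbitrary prior precision, and the identity $\mu = \boldsymbol{w}_{PLS}$ with $\lambda = \alpha \sigma^2$ connects this back to the PLS solution.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, r = 15, 5, 1.0
alpha, sigma2 = 1.0, 0.2                 # prior precision (assumed) and noise variance
x = np.linspace(0.0, 2.0 * np.pi, N)
t = np.sin(x) + rng.normal(0.0, np.sqrt(sigma2), N)
centers = np.linspace(0.0, 2.0 * np.pi, M)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / r**2)

# Posterior over w: Sigma = (alpha I + Phi^T Phi / sigma^2)^{-1},
#                   mu    = Sigma Phi^T t / sigma^2
Sigma = np.linalg.inv(alpha * np.eye(M) + Phi.T @ Phi / sigma2)
mu = Sigma @ Phi.T @ t / sigma2

# Sanity check: the posterior mean equals PLS with lambda = alpha * sigma^2
w_pls = np.linalg.solve(Phi.T @ Phi + alpha * sigma2 * np.eye(M), Phi.T @ t)
```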

&lt;p&gt;&lt;strong&gt;Bayesian learning&lt;/strong&gt; is about obtaining better estimates; see this &lt;a href=&quot;https://youtu.be/I4dkEALQv34&quot;&gt;REF youtube video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;marginalisation&lt;/strong&gt; - the distinguishing element of Bayesian methods is marginalisation: the attempt to integrate out all ‘nuisance’ variables&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;making prediction&lt;/strong&gt; - having learned from the training values $\boldsymbol{t}$, how do we make a prediction for data $t_{*}$ given a new input datum $x_{*}$?&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://youtu.be/BtwL7YBTVk0&quot;&gt;reference youtube&lt;/a&gt;: commentary by the KU DSBA lab&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Hyunjin Cho</name></author><category term="text" /><summary type="html">Reference lists textbook &amp;amp; youtube lecture lists by Steve Brunton textbook: [0037]_T03_R16 - Brunton, Steven L., and J. Nathan Kutz. Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press, 2019. Youtube lecture series by Steve Brunton(Total 23)</summary></entry><entry><title type="html">GitKraken 사용기 - 익숙해지기</title><link href="https://hjcho-lab.github.io/GitKraken/" rel="alternate" type="text/html" title="GitKraken 사용기 - 익숙해지기" /><published>2021-10-28T00:00:00+00:00</published><updated>2021-10-28T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/GitKraken</id><content type="html" xml:base="https://hjcho-lab.github.io/GitKraken/">&lt;h1 id=&quot;gitkraken&quot;&gt;GitKraken&lt;/h1&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;GitKraken profile differences between PC and Mac&lt;/strong&gt; Both machines are logged in to the same GitHub account (hihjcho@webmail.korea.ac.kr), so why are the profiles used on each machine not unified?
    &lt;ul&gt;
      &lt;li&gt;Keep in mind that a profile and an account mean different things.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;./img/profile_PC.png&quot; alt=&quot;GitKraken profile in PC&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;./img/profile_mac.png&quot; alt=&quot;GitKraken profile in Mac&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;As the commit history also shows, it is a mess and has become tangled. How can we untie the knot?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An error occurred when pushing, so the push was performed after a merge.
&lt;img src=&quot;./img/push_error_merge.png&quot; alt=&quot;Push error&quot; /&gt;&lt;/p&gt;

&lt;p&gt;
This is test inline equation, \(x=Fa\).&lt;/p&gt;</content><author><name>Hyunjin Cho</name></author><category term="text" /><summary type="html">GitKraken PC와 MAC에서의 GitKraken profile 차이 두 대의 머신 모두 Github hihjcho@webmail.korea.ac.kr 계정(account)으로 로그인하였는데, 각 머신에서 사용하는 profile이 왜 통일되지 않는가? profile과 account는 서로 다른 의미임을 생각해야 한다.</summary></entry><entry><title type="html">Git, Github의 활용 상황</title><link href="https://hjcho-lab.github.io/git_use/" rel="alternate" type="text/html" title="Git, Github의 활용 상황" /><published>2021-10-23T00:00:00+00:00</published><updated>2021-10-23T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/git_use</id><content type="html" xml:base="https://hjcho-lab.github.io/git_use/">&lt;p&gt;git
github
gitKraken&lt;/p&gt;

&lt;h1 id=&quot;언제-사용-할-수-있나&quot;&gt;When can they be used?&lt;/h1&gt;
&lt;p&gt;In a project that has grown fairly large, communication among members happens constantly through unstructured channels. 1) The project files have become too bulky to exchange by e-mail every time, and 2) we also feel the need for systematic(?) management by keeping a record of the update history.&lt;/p&gt;</content><author><name>Hyunjin Cho</name></author><summary type="html">git github gitKraken</summary></entry><entry><title type="html">Github pages 시작: 기본 설정을 위한 참고자료 및 more reference</title><link href="https://hjcho-lab.github.io/github_pages/" rel="alternate" type="text/html" title="Github pages 시작: 기본 설정을 위한 참고자료 및 more reference" /><published>2021-10-23T00:00:00+00:00</published><updated>2021-10-23T00:00:00+00:00</updated><id>https://hjcho-lab.github.io/github_pages</id><content type="html" xml:base="https://hjcho-lab.github.io/github_pages/">&lt;h1 id=&quot;github-pages-이용을-위한-길잡이&quot;&gt;A guide to using Github pages&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;References for the basic setup, in short&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A guiding video: let’s build something working first.
“깃헙(GitHub) 블로그 10분안에 완성하기” (Finish a GitHub blog in 10 minutes) by 테디노트&lt;/p&gt;
&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/ACzFIAOsfpM&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;Summary of configuration after forking minimal-mistakes (m.m.)&lt;/p&gt;
&lt;ul class=&quot;task-list&quot;&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Change the repo name (to hihjcho.github.io)&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Edit &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_config.yml&lt;/code&gt;: the url address should be edited&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;To post, create a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_posts&lt;/code&gt; folder and then &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;*.md&lt;/code&gt; files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A few facts about m.m.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;gem-based theme&lt;/li&gt;
  &lt;li&gt;Jekyll&lt;/li&gt;
  &lt;li&gt;ruby&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;구체적인-환경설정&quot;&gt;Detailed configuration&lt;/h1&gt;
&lt;p&gt;Reference: the &lt;a href=&quot;https://eona1301.github.io/&quot;&gt;blog&lt;/a&gt; and &lt;a href=&quot;https://github.com/eona1301/eona1301.github.io&quot;&gt;Github pages&lt;/a&gt; of 정리가 재밌는 개발자, which cover nearly every minimal-mistakes setting &lt;br /&gt;&lt;/p&gt;

&lt;p&gt;The following organizes the main points of the referenced blog posts post by post; it should later be reorganized independently of the posts &lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://eona1301.github.io/github_blog/GithubBlog-Start/&quot;&gt;[Github Blog] Jekyll - minimal mistakes 시작하기&lt;/a&gt;&lt;/p&gt;
&lt;ul class=&quot;task-list&quot;&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Install Ruby&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Install Jekyll&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Fork minimal-mistakes&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;How to check changes locally
    &lt;ul class=&quot;task-list&quot;&gt;
      &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;Another &lt;a href=&quot;https://ryureka.github.io/blog/GitHub-%EB%B8%94%EB%A1%9C%EA%B7%B8-%EB%A7%8C%EB%93%A4%EA%B8%B0(2)-%EA%B0%9C%EB%B0%9C-%ED%99%98%EA%B2%BD-%EA%B5%AC%EC%B6%95%ED%95%98%EA%B8%B0/&quot;&gt;reference blog&lt;/a&gt;: run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle exec jekyll serve&lt;/code&gt; in a cmd window, then enter &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:4000&lt;/code&gt; in a browser&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;https://eona1301.github.io/github_blog/GithubBlog-config/&quot;&gt;[Github Blog] minimal mistakes - config.yml 수정하기&lt;/a&gt;&lt;/p&gt;
&lt;ul class=&quot;task-list&quot;&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Enter basic info: settings applied blog-wide&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Main profile area: the profile area on the left side of the blog&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Bottom profile area: the profile area at the bottom of the blog&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; checked=&quot;checked&quot; /&gt;Number of posts on the front page&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;defaults settings (?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;https://eona1301.github.io/github_blog/GithubBlog-Category/&quot;&gt;[Github Blog] minimal mistakes - Category 세팅하기&lt;/a&gt;&lt;/p&gt;
&lt;ul class=&quot;task-list&quot;&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;Edit ‘navigation.yml’: set up the top categories&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;Edit the desired pages among the designated categories&lt;/li&gt;
  &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; class=&quot;task-list-item-checkbox&quot; disabled=&quot;disabled&quot; /&gt;Handling the TOC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Body width and font size&lt;/p&gt;

&lt;p&gt;Adding a comment feature&lt;/p&gt;

&lt;p&gt;Visitor statistics&lt;/p&gt;

&lt;p&gt;Showing a search box&lt;/p&gt;

&lt;h2 id=&quot;math-equation-represeantation-수식-사용을-위한-환경설정&quot;&gt;math equation representation: configuration for using equations&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;Method #1: configure the md processor (kramdown), create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mathjax_support.html&lt;/code&gt;, and so on
    &lt;ul&gt;
      &lt;li&gt;1: https://chaelin0722.github.io/blog/mathjex/&lt;/li&gt;
      &lt;li&gt;2: https://mkkim85.github.io/blog-apply-mathjax-to-jekyll-and-github-pages/&lt;/li&gt;
      &lt;li&gt;3: http://benlansdell.github.io/computing/mathjax/&lt;/li&gt;
      &lt;li&gt;Result: inline equations render, but equation blocks do not.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
  &lt;li&gt;Method #2:
    &lt;ul&gt;
      &lt;li&gt;http://www.ktug.org/xe/index.php?mid=KTUG_open_board&amp;amp;document_srl=248288&lt;/li&gt;
    &lt;/ul&gt;
    &lt;blockquote&gt;
      &lt;p&gt;When kramdown changed versions in August 2020, the MathJax setup published earlier that year broke.&lt;/p&gt;
    &lt;/blockquote&gt;
    &lt;ul&gt;
      &lt;li&gt;Github and LaTeX: https://eeeuns.github.io/2020/12/10/githubblog/&lt;/li&gt;
      &lt;li&gt;Result: inline equations do not render, but equation blocks do -.-&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Method #3: not tried
    &lt;ul&gt;
      &lt;li&gt;https://ansohxxn.github.io/blog/math-equation/&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Method #4: go straight to the source: https://www.mathjax.org/
    &lt;ul&gt;
      &lt;li&gt;https://baeseongsu.github.io/posts/apply-mathjax-to-jekyll-blog/ : the math delimiters must be adjusted.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Conclusion
    &lt;ul&gt;
      &lt;li&gt;Following method #2, create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/includes/Mathjax.html&lt;/code&gt; and edit &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_layouts/default.html&lt;/code&gt;&lt;/li&gt;
      &lt;li&gt;Following method #4, add the math delimiters&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
  &lt;li&gt;inline equation: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$...$&lt;/code&gt; syntax&lt;/li&gt;
  &lt;li&gt;equation block: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$$...$$&lt;/code&gt; syntax&lt;/li&gt;
  &lt;li&gt;specific TeX syntax&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;youtube-link-display-test&quot;&gt;Testing pretty youtube links&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=WhWc3b3KhnY&quot; class=&quot;glightbox&quot;&gt;Spring - Blender Open Movie&lt;/a&gt;&lt;br /&gt;
https://www.youtube.com/watch?v=WhWc3b3KhnY&lt;/p&gt;

&lt;p&gt;When a blog category is clicked, show only the matching posts; within a category, also display sub-categories in the left-hand menu.&lt;/p&gt;

&lt;p&gt;There should also be ways of grouping all posts: by year, by topic.&lt;/p&gt;

&lt;h1 id=&quot;more-github-pages-reference&quot;&gt;more Github pages REFERENCE&lt;/h1&gt;
&lt;ul&gt;
  &lt;li&gt;정리가 재밌는 개발자: &lt;a href=&quot;https://eona1301.github.io/&quot;&gt;blog&lt;/a&gt;, &lt;a href=&quot;https://github.com/eona1301/eona1301.github.io&quot;&gt;Github pages&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;이수환’s blog: https://suhwan.dev/2017/06/23/jekyll-project-structure/&lt;/li&gt;
  &lt;li&gt;Animadversio, a Chinese student at WashU: https://animadversio.github.io/&lt;/li&gt;
  &lt;li&gt;Prof. 김태곤: https://github.com/taegon&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://ryureka.github.io/&quot;&gt;류정관&lt;/a&gt;
    &lt;ul&gt;
      &lt;li&gt;Setting up a local development environment&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;https://ansohxxn.github.io/blog/math-equation/ : how to adjust detailed settings such as showing categories and page setup&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Hyunjin Cho</name></author><summary type="html">Github pages 이용을 위한 길잡이</summary></entry></feed>