This article records some machine-learning knowledge points, along with their formulas and how to write them.
Linear model:

$$y = wx + b$$
Sigmoid function, and the sigmoid applied to the output of the linear model (as in logistic regression):

$$\sigma(x) = \frac{1}{1+e^{-x}}$$

$$\sigma(x) = \frac{1}{1+e^{-(wx+b)}}$$
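A minimal Python sketch of the two formulas above (the values of `w`, `b`, and `x` are made-up examples, not from the article):

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)): squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Logistic-regression-style prediction: sigmoid applied to wx + b.
w, b, x = 2.0, -1.0, 0.5   # arbitrary example values
print(sigmoid(w * x + b))  # 0.5, since w*x + b = 0 here
```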
Gini index of dataset $D$ under attribute $a$, where $\text{Gini}(D)$ measures the impurity of $D$:

$$\text{Gini\_index}(D, a) = \sum_{v=1}^{V} \frac{|D^v|}{|D|}\,\text{Gini}(D^v)$$

$$\text{Gini}(D) = 1 - \sum_{k=1}^{|y|} p_k^2$$
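A small Python sketch of the Gini computation, using a made-up toy split:

```python
from collections import Counter

def gini(labels):
    # Gini(D) = 1 - sum_k p_k^2, where p_k is the fraction of class k in D
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_index(subsets):
    # Gini_index(D, a) = sum_v |D^v|/|D| * Gini(D^v),
    # where `subsets` are the partitions D^v induced by attribute a
    total = sum(len(s) for s in subsets)
    return sum(len(s) / total * gini(s) for s in subsets)

# Toy split of binary labels into two subsets (made-up data):
print(gini_index([["yes", "yes", "no"], ["no", "no"]]))  # ~0.267
```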
Multiplication rule:

$$P(AB) = P(B \mid A_i)\,P(A_i)$$
Law of total probability:

$$P(B) = \sum_{k=1}^{n} P(B \mid A_k)\,P(A_k)$$
Bayes' theorem:

$$P(A_i \mid B) = \frac{P(AB)}{P(B)} = \frac{P(B \mid A_i)\,P(A_i)}{\displaystyle\sum_{k=1}^{n} P(B \mid A_k)\,P(A_k)}$$
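A sketch of the three formulas above in Python, with hypothetical numbers for a diagnostic test (the sensitivity, false-positive rate, and prevalence are invented for illustration):

```python
p_b_given_a = [0.99, 0.05]   # P(B | A_k): positive test given sick / healthy
p_a = [0.01, 0.99]           # P(A_k): prior probability of sick / healthy

# Law of total probability: P(B) = sum_k P(B | A_k) * P(A_k)
p_b = sum(pb * pa for pb, pa in zip(p_b_given_a, p_a))

# Bayes' theorem: P(A_1 | B) = P(B | A_1) * P(A_1) / P(B)
posterior_sick = p_b_given_a[0] * p_a[0] / p_b
print(posterior_sick)  # ~0.167: a positive test is far from certainty
```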
Let

$$A = [a_1, a_2, \ldots, a_n]$$

Then

$$|A| = \sqrt{a_1^2 + a_2^2 + \cdots + a_n^2} = \sqrt{\sum_{i=1}^{n} a_i^2}$$
Let

$$A = [a_1, a_2, \ldots, a_n], \quad B = [b_1, b_2, \ldots, b_n]$$

Then

$$A \cdot B = |A|\,|B|\cos\theta = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i$$
Let

$$A = [a_1, a_2, \ldots, a_n], \quad B = [b_1, b_2, \ldots, b_n]$$

Then the cosine similarity is the inner product of the vectors divided by the product of their norms (their $L_2$ norms):

$$\text{similarity} = \cos(\theta) = \frac{A \cdot B}{|A|\,|B|} = \frac{A}{|A|} \cdot \frac{B}{|B|} = \frac{\displaystyle\sum_{i=1}^{n} a_i b_i}{\sqrt{\displaystyle\sum_{i=1}^{n} a_i^2}\,\sqrt{\displaystyle\sum_{i=1}^{n} b_i^2}}$$
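A Python sketch of the norm, dot product, and cosine similarity formulas (the test vectors are arbitrary examples):

```python
import math

def cosine_similarity(a, b):
    # similarity = sum(a_i * b_i) / (sqrt(sum a_i^2) * sqrt(sum b_i^2))
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Parallel vectors give 1, orthogonal vectors give 0:
print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # ~1.0
print(cosine_similarity([1, 0], [0, 1]))        # 0.0
```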
For independent, identically distributed samples the likelihood factorizes:

$$P(x_1, x_2, x_3, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} P(x_i \mid \theta)$$
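In practice one usually works with the log-likelihood, which turns the product into a sum and avoids floating-point underflow. A sketch assuming a Bernoulli model with hypothetical samples:

```python
import math

def log_likelihood(samples, p):
    # log P(x_1..x_n | theta) = sum_i log P(x_i | theta),
    # where theta here is the Bernoulli parameter p (a hypothetical model)
    return sum(math.log(p if x == 1 else 1 - p) for x in samples)

samples = [1, 0, 1, 1, 0, 1]           # made-up i.i.d. coin flips
print(log_likelihood(samples, 0.5))    # ~-4.159
print(log_likelihood(samples, 4 / 6))  # the sample mean gives a higher value
```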
If a random variable $X$ takes only the two values 0 and 1, with the corresponding probabilities:
$$\Pr(X=1) = p, \quad \Pr(X=0) = 1 - p, \quad 0 < p < 1$$
then $X$ is said to follow a Bernoulli distribution with parameter $p$, and its probability function can be written as:
$$f(x \mid p) = p^x (1-p)^{1-x} = \begin{cases} p & x = 1 \\ 1-p & x = 0 \\ 0 & x \neq 0, 1 \end{cases}$$
Letting $q = 1 - p$, this can also be written as:
$$f(x \mid p) = \begin{cases} p^x q^{1-x} & x = 0, 1 \\ 0 & x \neq 0, 1 \end{cases}$$
Definition: a random variable $X$ follows a Bernoulli distribution with parameter $p$ ($0 < p < 1$) if it takes the value 1 with probability $p$ and the value 0 with probability $1-p$. Its expectation is $E(X) = p$ and its variance is $D(X) = p(1-p)$.
Which events follow a Bernoulli distribution: any event with a single trial and two possible outcomes follows a Bernoulli distribution (e.g., a coin flip, or a cat-vs-dog classification).
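A quick Python check of the pmf and the moments above (`p = 0.3` is an arbitrary example):

```python
def bernoulli_pmf(x, p):
    # f(x | p) = p^x * (1-p)^(1-x) for x in {0, 1}, else 0
    if x not in (0, 1):
        return 0.0
    return p ** x * (1 - p) ** (1 - x)

p = 0.3                   # example parameter
print(bernoulli_pmf(1, p))  # 0.3
print(bernoulli_pmf(0, p))  # 0.7
print(p, p * (1 - p))       # E(X) = p, D(X) = p(1-p)
```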
The information content of an event can be defined in the following form:
$$f(p) = -\log_2 p$$
The entropy $H$ of a probability system $P$ can be defined as the expectation of the information content $f$ over the system $P$:
$$H(P) := E[f] = \sum_{i=1}^{m} p_i\,f(p_i) = \sum_{i=1}^{m} p_i(-\log_2 p_i) = -\sum_{i=1}^{m} p_i \log_2 p_i$$
In short, computing a system's entropy means taking the information content $-\log_2 p_i$ of every event that can occur in the system, multiplying it by that event's probability $p_i$, and then summing the products $-p_i \log_2 p_i$; the result is the entropy of the system.
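A minimal Python sketch of this procedure (the distributions are example values):

```python
import math

def entropy(probs):
    # H(P) = -sum_i p_i * log2(p_i): information content of each event,
    # weighted by its probability, then summed
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))  # ~0.469 bits: a biased coin is more predictable
```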
Relative entropy (the KL divergence) measures the entropy gap between two systems, with the following formula:
$$D_{KL}(P \| Q) := \sum_{i=1}^{m} p_i\,\big(f_Q(q_i) - f_P(p_i)\big) = \sum_{i=1}^{m} p_i\,\big((-\log_2 q_i) - (-\log_2 p_i)\big) = \sum_{i=1}^{m} p_i(-\log_2 q_i) - \sum_{i=1}^{m} p_i(-\log_2 p_i) = H(P, Q) - H(P)$$
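A sketch of the KL divergence in Python, reusing the same base-2 information content (the distributions `p` and `q` are examples; the code assumes $q_i > 0$ wherever $p_i > 0$):

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i p_i * (log2(p_i) - log2(q_i)) = H(P, Q) - H(P)
    return sum(pi * (math.log2(pi) - math.log2(qi))
               for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]              # "true" distribution (example values)
q = [0.9, 0.1]              # approximating distribution
print(kl_divergence(p, p))  # 0.0: identical systems have no gap
print(kl_divergence(p, q))  # ~0.737: the divergence is always non-negative
```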
The basic formula of cross-entropy is as follows:
$$H(P, Q) = \sum_{i=1}^{m} x_i\,(-\log_2 y_i)$$
After accounting for both the positive and the negative case, it can be written in the following form:
$$H(P, Q) = -\sum_{i=1}^{n} \big( x_i \log_2 y_i + (1 - x_i) \log_2 (1 - y_i) \big)$$
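A Python sketch of binary cross-entropy as written above, with made-up labels and predictions (base-2 logs to match the formula; most libraries use the natural log, which only changes the scale):

```python
import math

def binary_cross_entropy(labels, preds):
    # H(P, Q) = -sum_i (x_i * log2(y_i) + (1 - x_i) * log2(1 - y_i)),
    # with x_i the true label and y_i the predicted probability
    return -sum(x * math.log2(y) + (1 - x) * math.log2(1 - y)
                for x, y in zip(labels, preds))

labels = [1, 0, 1]        # made-up ground-truth labels
good = [0.9, 0.1, 0.8]    # confident, correct predictions -> low loss
bad = [0.4, 0.6, 0.3]     # wrong-leaning predictions -> higher loss
print(binary_cross_entropy(labels, good))  # ~0.63
print(binary_cross_entropy(labels, bad))   # ~4.38
```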
Suppose $f(x)$ has derivatives up to order $n$ at $x_0$; then:
$$f(x) = f(x_0) + \frac{f'(x_0)}{1!}(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + o\big[(x - x_0)^n\big]$$
When $x_0 = 0$, the Taylor formula becomes the Maclaurin formula:
$$f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \cdots + \frac{f^{(n)}(0)}{n!}x^n + o(x^n)$$
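As a concrete instance, a Python sketch that expands $e^x$ at $x_0 = 0$, where every derivative at 0 equals 1 (the choice of function and term counts is my own example):

```python
import math

def maclaurin_exp(x, n):
    # e^x = sum_{k=0}^{n} x^k / k! + o(x^n); for f(x) = e^x every
    # derivative at 0 equals 1, so each term is just x^k / k!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# More terms -> a better approximation near the expansion point 0:
for n in (1, 3, 7):
    print(n, maclaurin_exp(1.0, n), math.exp(1.0))
```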
If a random variable $X$ follows a probability distribution with location parameter $\mu$ and scale parameter $\sigma$, and its probability density function is:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
then this random variable is called a normal random variable, and the distribution it follows is called the normal distribution, written $X \sim N(\mu, \sigma^2)$, read as "$X$ follows $N(\mu, \sigma^2)$" or "$X$ is normally distributed".
When $\mu = 0$ and $\sigma = 1$, the normal distribution becomes the standard normal distribution:

$$f(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)$$
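A Python sketch of the density function, with the second call using arbitrary example parameters:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2 * sigma^2))
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# At the mean of the standard normal the density peaks at 1/sqrt(2*pi):
print(normal_pdf(0.0))                     # ~0.3989
print(normal_pdf(1.0, mu=1.0, sigma=2.0))  # lower peak for larger sigma
```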