JavaCV uses wrappers from the JavaCPP Presets of libraries commonly used by researchers in the field of computer vision (OpenCV, FFmpeg, libdc1394, FlyCapture, Spinnaker, OpenKinect, librealsense, CL PS3 Eye Driver, videoInput, ARToolKitPlus, flandmark, Leptonica, and Tesseract) and provides utility classes to make their functionality easier to use on the Java platform, including Android.
JavaCV also comes with hardware accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods to execute code in parallel on multiple cores (Parallel), user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), detection and matching of feature points (ObjectFinder), a set of classes that implement direct image alignment of projector-camera systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer, and ReflectanceInitializer), a blob analysis package (Blobs), as well as miscellaneous functionality in the JavaCV class. Some of these classes also have an OpenCL and OpenGL counterpart, their names ending with CL or starting with GL, i.e.: JavaCVCL, GLCanvasFrame, etc.
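To give a feel for these helpers, here is a minimal sketch of running a loop over multiple cores with Parallel. It assumes the Parallel.loop(from, to, looper) overload and its nested Looper callback interface, so please check the actual class before relying on the exact signatures:

import org.bytedeco.javacv.Parallel;

public class ParallelExample {
    public static void main(String[] args) {
        final float[] data = new float[1000000];
        // Splits the index range [0, data.length) into chunks and hands each
        // chunk to the callback, running the chunks on multiple threads.
        Parallel.loop(0, data.length, new Parallel.Looper() {
            public void loop(int from, int to, int looperID) {
                for (int i = from; i < to; i++) {
                    data[i] = (float)Math.sqrt(i);
                }
            }
        });
    }
}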
To learn how to use the API, since documentation is currently lacking, please refer to the Sample Usage section below as well as the sample programs, including two for Android (FacePreview.java and RecordActivity.java), also found in the samples directory. You may also find it useful to refer to the source code of ProCamCalib and ProCamTracker, as well as the examples ported from the OpenCV2 Cookbook and the associated wiki pages.
Please keep me informed of any updates or fixes you make to the code so that I may integrate them into the next release. Thank you! And feel free to ask questions on the mailing list or the discussion forum if you encounter any problems with the software! I am sure it is far from perfect...
Archives containing JAR files are available as releases. The binary archives contain builds for Android, iOS, Linux, Mac OS X, and Windows. The JAR files for specific child modules or platforms can also be obtained individually from the Maven Central Repository.

To install the JAR files manually, follow the instructions in the Manual Installation section below.
We can also have everything downloaded and installed automatically with:

Maven (inside the pom.xml file):

  <dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.5.11</version>
  </dependency>

Gradle (inside the build.gradle.kts or build.gradle file):

  dependencies {
    implementation("org.bytedeco:javacv-platform:1.5.11")
  }

Leiningen (inside the project.clj file):

  :dependencies [
    [org.bytedeco/javacv-platform "1.5.11"]
  ]

sbt (inside the build.sbt file):

  libraryDependencies += "org.bytedeco" % "javacv-platform" % "1.5.11"
This downloads binaries for all platforms, but to get binaries for only one platform we can set the javacpp.platform system property (via the -D command line option) to something like android-arm, linux-x86_64, macosx-x86_64, windows-x86_64, etc. Please refer to the README.md file of the JavaCPP Presets for details. Another option available to Gradle users is Gradle JavaCPP, and similarly Scala users can rely on SBT-JavaCV.
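To illustrate the javacpp.platform property, here is how it could be passed on the command line for the Maven project shown further below; the goals are just the ones used later in this document:

$ mvn compile exec:java -Dexec.mainClass=Demo -Djavacpp.platform=linux-x86_64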
To use JavaCV, you will first need to download and install the following software:

In addition, although not always required, some functionality of JavaCV also relies on:

Finally, please make sure everything has the same bitness: 32-bit and 64-bit modules do not mix under any circumstances.
Simply put all the desired JAR files (opencv*.jar, ffmpeg*.jar, etc.), in addition to javacpp.jar and javacv.jar, somewhere in your classpath; a command-line sketch follows the list below. Here are some more specific instructions for common cases:
NetBeans (Java SE 7 or newer):

Eclipse (Java SE 7 or newer):

Visual Studio Code (Java SE 7 or newer): click "+".

IntelliJ IDEA (Android 7.0 or newer): copy the JAR files into the app/libs subdirectory, click "+" and select "2 File dependency", then select all the JAR files from the libs subdirectory, and make sure android:extractNativeLibs="true" is set in the AndroidManifest.xml file.
After that, the wrapper classes for OpenCV and FFmpeg, for example, can automatically access all of their C/C++ APIs.

The class definitions are basically ports to Java of the original header files in C/C++, and I deliberately decided to keep as much of the original syntax as possible. For example, here is a method that tries to load an image file, smooth it, and save it back to disk:
import org.bytedeco.opencv.opencv_core.*;
import org.bytedeco.opencv.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_core.*;
import static org.bytedeco.opencv.global.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_imgcodecs.*;

public class Smoother {
    public static void smooth(String filename) {
        Mat image = imread(filename);
        if (image != null) {
            GaussianBlur(image, image, new Size(3, 3), 0);
            imwrite(filename, image);
        }
    }
}
JavaCV also comes with helper classes and methods on top of OpenCV and FFmpeg to facilitate their integration to the Java platform. Here is a small demo program demonstrating the most frequently used parts:
import java.io.File;
import java.net.URL;
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.*;
import org.bytedeco.opencv.opencv_core.*;
import org.bytedeco.opencv.opencv_imgproc.*;
import org.bytedeco.opencv.opencv_calib3d.*;
import org.bytedeco.opencv.opencv_objdetect.*;
import static org.bytedeco.opencv.global.opencv_core.*;
import static org.bytedeco.opencv.global.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_calib3d.*;
import static org.bytedeco.opencv.global.opencv_objdetect.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            URL url = new URL("https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml");
            File file = Loader.cacheResource(url);
            classifierName = file.getAbsolutePath();
        }

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CascadeClassifier classifier = new CascadeClassifier(classifierName);
        if (classifier == null) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
        // DC1394FrameGrabber, FlyCapture2FrameGrabber, OpenKinectFrameGrabber, OpenKinect2FrameGrabber,
        // RealSenseFrameGrabber, RealSense2FrameGrabber, PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = FrameGrabber.createDefault(0);
        grabber.start();

        // CanvasFrame, FrameGrabber, and FrameRecorder use Frame objects to communicate image data.
        // We need a FrameConverter to interface with other APIs (Android, Java 2D, JavaFX, Tesseract, OpenCV, etc).
        OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();

        // FAQ about IplImage and Mat objects from OpenCV:
        // - For custom raw processing of data, createBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData, and under Android we can
        //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
        // - To get a BufferedImage from an IplImage, or vice versa, we can chain calls to
        //   Java2DFrameConverter and OpenCVFrameConverter, one after the other.
        // - Java2DFrameConverter also has static copy() methods that we can use to transfer
        //   data more directly between BufferedImage and IplImage or Mat via Frame objects.
        Mat grabbedImage = converter.convert(grabber.grab());
        int height = grabbedImage.rows();
        int width = grabbedImage.cols();

        // Objects allocated with `new`, clone(), or a create*() factory method are automatically released
        // by the garbage collector, but may still be explicitly released by calling deallocate().
        // You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc. on objects allocated this way.
        Mat grayImage = new Mat(height, width, CV_8UC1);
        Mat rotatedImage = grabbedImage.clone();

        // The OpenCVFrameRecorder class simply uses the VideoWriter of opencv_videoio,
        // but FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = FrameRecorder.createDefault("output.avi", width, height);
        recorder.start();

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        // We should also specify the relative monitor/camera response for proper gamma correction.
        CanvasFrame frame = new CanvasFrame("Some Title", CanvasFrame.getDefaultGamma() / grabber.getGamma());

        // Let's create some random 3D rotation...
        Mat randomR = new Mat(3, 3, CV_64FC1),
            randomAxis = new Mat(3, 1, CV_64FC1);
        // We can easily and efficiently access the elements of matrices and images
        // through an Indexer object with the set of get() and put() methods.
        DoubleIndexer Ridx = randomR.createIndexer(),
                   axisIdx = randomAxis.createIndexer();
        axisIdx.put(0, (Math.random() - 0.5) / 4,
                       (Math.random() - 0.5) / 4,
                       (Math.random() - 0.5) / 4);
        Rodrigues(randomAxis, randomR);
        double f = (width + height) / 2.0;
        Ridx.put(0, 2, Ridx.get(0, 2) * f);
        Ridx.put(1, 2, Ridx.get(1, 2) * f);
        Ridx.put(2, 0, Ridx.get(2, 0) / f);
        Ridx.put(2, 1, Ridx.get(2, 1) / f);
        System.out.println(Ridx);

        // We can allocate native arrays using constructors taking an integer as argument.
        Point hatPoints = new Point(3);

        while (frame.isVisible() && (grabbedImage = converter.convert(grabber.grab())) != null) {
            // Let's try to detect some faces! but we need a grayscale image...
            cvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            RectVector faces = new RectVector();
            classifier.detectMultiScale(grayImage, faces);
            long total = faces.size();
            for (long i = 0; i < total; i++) {
                Rect r = faces.get(i);
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                rectangle(grabbedImage, new Point(x, y), new Point(x + w, y + h), Scalar.RED, 1, CV_AA, 0);

                // To access or pass as argument the elements of a native array, call position() before.
                hatPoints.position(0).x(x - w / 10).y(y - h / 10);
                hatPoints.position(1).x(x + w * 11 / 10).y(y - h / 10);
                hatPoints.position(2).x(x + w / 2).y(y - h / 2);
                fillConvexPoly(grabbedImage, hatPoints.position(0), 3, Scalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            threshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            MatVector contours = new MatVector();
            findContours(grayImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            long n = contours.size();
            for (long i = 0; i < n; i++) {
                Mat contour = contours.get(i);
                Mat points = new Mat();
                approxPolyDP(contour, points, arcLength(contour, true) * 0.02, true);
                drawContours(grabbedImage, new MatVector(points), -1, Scalar.BLUE);
            }

            warpPerspective(grabbedImage, rotatedImage, randomR, rotatedImage.size());

            Frame rotatedFrame = converter.convert(rotatedImage);
            frame.showImage(rotatedFrame);
            recorder.record(rotatedFrame);
        }
        frame.dispose();
        recorder.stop();
        grabber.stop();
    }
}
Further, after creating a pom.xml file with the following content:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.bytedeco.javacv</groupId>
  <artifactId>demo</artifactId>
  <version>1.5.11</version>
  <properties>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.bytedeco</groupId>
      <artifactId>javacv-platform</artifactId>
      <version>1.5.11</version>
    </dependency>

    <!-- Additional dependencies required to use CUDA and cuDNN -->
    <dependency>
      <groupId>org.bytedeco</groupId>
      <artifactId>opencv-platform-gpu</artifactId>
      <version>4.10.0-1.5.11</version>
    </dependency>

    <!-- Optional GPL builds with (almost) everything enabled -->
    <dependency>
      <groupId>org.bytedeco</groupId>
      <artifactId>ffmpeg-platform-gpl</artifactId>
      <version>7.1-1.5.11</version>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>.</sourceDirectory>
  </build>
</project>
By placing the source code above in Demo.java, or similarly for other classes found in the samples directory, we can use the following command to have everything first installed automatically and then executed by Maven:
$ mvn compile exec:java -Dexec.mainClass=Demo
Note: In case of errors, please make sure the artifactId in the pom.xml file reads javacv-platform, not just javacv. The artifact javacv-platform adds all the necessary binary dependencies.
If the binary files available above are not enough for your needs, you might need to rebuild them from the source code. To this end, project files were created for:

Once installed, simply call the usual mvn install command for JavaCPP, its Presets, and JavaCV. By default, no other dependencies than a C++ compiler for JavaCPP are required. Please refer to the comments inside the pom.xml files for further details.
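As a rough sketch of that sequence, assuming the repositories under the Bytedeco organization on GitHub (the exact branches and build options are left to the reader):

$ git clone https://github.com/bytedeco/javacpp
$ git clone https://github.com/bytedeco/javacpp-presets
$ git clone https://github.com/bytedeco/javacv
$ (cd javacpp && mvn install)
$ (cd javacpp-presets && mvn install)   # builds the native libraries, needs a C++ compiler
$ (cd javacv && mvn install)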
Instead of building the native libraries manually, we can also run mvn install for JavaCV only and rely on the snapshot artifacts from the CI builds.
Project lead: Samuel Audet (samuel.audet at gmail.com)
Developer site: https://github.com/bytedeco/javacv
Discussion group: http://groups.google.com/group/javacv