
Importing data into HBase with Flume


Install Flume:

Download the flume-ng-1.6.0-cdh5.5.2.tar.gz tarball and unpack it:

[hadoop@h71 ~]$ tar -zxvf flume-ng-1.6.0-cdh5.5.2.tar.gz
Edit the flume-env.sh configuration file; the main change is setting the JAVA_HOME variable:
[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ cp conf/flume-env.sh.template conf/flume-env.sh
[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ vi conf/flume-env.sh
Add:
export JAVA_HOME=/usr/jdk1.7.0_25

(Point this at your own Java installation directory; jdk1.7.0_25 is used here. Note that the Java version should not be too old.)
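To sanity-check the path before going further, you can run the JDK directly; the first line of output should report the version you installed:

[hadoop@h71 ~]$ /usr/jdk1.7.0_25/bin/java -version
java version "1.7.0_25"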

Verify the installation:
[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng version

 

 
Flume 1.6.0-cdh5.5.2
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: f65a722cd2d1e7ceeda972570a5a6ee01c3a0a3d
Compiled by jenkins on Mon Jan 25 16:38:11 PST 2016
From source with checksum 028a2c6b035a03df1dfa91a3feda3424


[hadoop@h71 ~]$ cd hbase-1.0.0-cdh5.5.2/lib/
Then copy the following jar files from HBase's lib directory into Flume's lib directory:
[hadoop@h71 lib]$ cp protobuf-java-2.5.0.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-protocol-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-client-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-common-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-server-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-hadoop2-compat-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp hbase-hadoop-compat-1.0.0-cdh5.5.2.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
[hadoop@h71 lib]$ cp htrace-core-3.2.0-incubating.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/
(Alternatively, you can simply copy every jar under hbase-1.0.0-cdh5.5.2/lib into Flume's lib directory.)
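A one-line sketch of that copy-everything shortcut, run from the HBase lib directory as above. Note that it may drop in jars that duplicate ones Flume already ships, so the targeted copies listed above are the safer route:

[hadoop@h71 lib]$ cp *.jar /home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin/lib/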

 

 

Make sure the test_idoall_org table already exists in HBase:

 

 
hbase(main):002:0> create 'test_idoall_org','uid','name'
0 row(s) in 0.6730 seconds

=> Hbase::Table - test_idoall_org

hbase(main):003:0> put 'test_idoall_org','10086','name:idoall','idoallvalue'
0 row(s) in 0.0960 seconds
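If you want to double-check the table before wiring up Flume, the standard HBase shell commands work (the prompt numbers here are illustrative):

hbase(main):004:0> describe 'test_idoall_org'
hbase(main):004:0> get 'test_idoall_org','10086'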


[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ vi conf/hbase_simple.conf

 

 

 
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/data.txt

# Describe the sink
a1.sinks.k1.type = hbase
a1.sinks.k1.table = test_idoall_org
a1.sinks.k1.columnFamily = name
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
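With no further settings, RegexHbaseEventSerializer matches the whole event body against the regex (.*) and stores it in a single column named payload under the configured column family; that is why the scan at the end of this post shows a name:payload cell. If your lines have structure, the serializer's regex and colNames properties can split them into separate columns. A sketch, assuming comma-separated lines with two fields (uid and msg are made-up column names, not from the original config):

a1.sinks.k1.serializer.regex = ([^,]*),(.*)
a1.sinks.k1.serializer.colNames = uid,msg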


Start the Flume agent:
[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng agent -c conf -f conf/hbase_simple.conf -n a1 -Dflume.root.logger=INFO,console
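This keeps the agent in the foreground logging to the console, which is handy for a test. For anything longer-lived, one common pattern (not from the original post) is to background it and capture the log to a file:

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ nohup bin/flume-ng agent -c conf -f conf/hbase_simple.conf -n a1 > flume.log 2>&1 &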

 

 

 
......(earlier output omitted; there is a lot of it)
:/home/hadoop/hive-1.1.0-cdh5.5.2/lib/logredactor-1.0.3.jar:/home/hadoop/hive-1.1.0-cdh5.5.2/lib/commons-dbcp-1.4.jar:/home/hadoop/hive-1.1.0-cdh5.5.2/lib/jcommander-1.32.jar
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/home/hadoop/hadoop-2.6.0-cdh5.5.2/lib/native
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-194.el5
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/apache-flume-1.6.0-cdh5.5.2-bin
12/12/13 00:21:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x1d630160x0, quorum=localhost:2181, baseZNode=/hbase
12/12/13 00:21:09 INFO zookeeper.ClientCnxn: Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
12/12/13 00:21:09 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /127.0.0.1:47668, server: 127.0.0.1/127.0.0.1:2181
12/12/13 00:21:09 INFO zookeeper.ClientCnxn: Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x3b8fd59e830000, negotiated timeout = 90000
12/12/13 00:21:10 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
12/12/13 00:21:10 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
12/12/13 00:21:10 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
(startup succeeded)
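The ZooKeeper lines show the sink locating HBase through the quorum at localhost:2181, taken from the hbase-site.xml on the classpath. If HBase's ZooKeeper runs on another host, the HBase sink also accepts the quorum directly in the agent config; a sketch, where h71:2181 stands in for your actual quorum:

a1.sinks.k1.zookeeperQuorum = h71:2181
a1.sinks.k1.znodeParent = /hbase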


Generate some data:
[hadoop@h71 ~]$ touch data.txt
[hadoop@h71 ~]$ echo "hello idoall.org from flume" >> data.txt
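Because the source runs tail -F, anything appended to data.txt afterwards is shipped as well, so you can keep feeding events (a hypothetical second line):

[hadoop@h71 ~]$ echo "hello again from flume" >> data.txt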

Now log in to HBase and you can see that the new data has been inserted. The row key is generated by RegexHbaseEventSerializer from the event's millisecond timestamp, a random string, and a counter:

 

 

 
hbase(main):005:0> scan 'test_idoall_org'
ROW                           COLUMN+CELL
 10086                        column=name:idoall, timestamp=1355329032253, value=idoallvalue
 1355329550628-0EZpfeEvxG-0   column=name:payload, timestamp=1355329383396, value=hello idoall.org from flume
2 row(s) in 0.0140 seconds
